Feb 03 06:00:28 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 03 06:00:28 crc restorecon[4591]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 06:00:28 crc restorecon[4591]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:28 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 03 06:00:29 crc restorecon[4591]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 03 06:00:29 crc kubenswrapper[4872]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 06:00:29 crc kubenswrapper[4872]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 03 06:00:29 crc kubenswrapper[4872]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 06:00:29 crc kubenswrapper[4872]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 03 06:00:29 crc kubenswrapper[4872]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 03 06:00:29 crc kubenswrapper[4872]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.848476 4872 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853637 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853669 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853679 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853714 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853725 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853733 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853742 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853750 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853759 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853767 4872 feature_gate.go:330] unrecognized feature gate: Example Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853775 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853782 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853790 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853798 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853805 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853813 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853820 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853828 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853836 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853847 4872 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853857 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853879 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853888 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853901 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853912 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853921 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853929 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853937 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853949 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853958 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853967 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853976 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853986 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.853994 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854003 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854011 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854019 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854027 4872 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854034 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854042 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854050 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854058 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854065 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854074 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854082 4872 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854090 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854097 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854105 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854115 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854123 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854131 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854138 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854146 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854154 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854165 4872 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854176 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854185 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854193 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854202 4872 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854209 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854217 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854225 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854232 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854240 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854247 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854255 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854263 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854270 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854278 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.854286 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 
06:00:29.854294 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856056 4872 flags.go:64] FLAG: --address="0.0.0.0" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856081 4872 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856098 4872 flags.go:64] FLAG: --anonymous-auth="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856110 4872 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856124 4872 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856133 4872 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856146 4872 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856157 4872 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856167 4872 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856177 4872 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856187 4872 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856197 4872 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856206 4872 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856215 4872 flags.go:64] FLAG: --cgroup-root="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856224 4872 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856233 4872 flags.go:64] FLAG: --client-ca-file="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856243 4872 flags.go:64] FLAG: --cloud-config="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856252 4872 flags.go:64] FLAG: --cloud-provider="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856261 4872 flags.go:64] FLAG: --cluster-dns="[]" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856273 4872 flags.go:64] FLAG: --cluster-domain="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856282 4872 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856291 4872 flags.go:64] FLAG: --config-dir="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856300 4872 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856312 4872 flags.go:64] FLAG: --container-log-max-files="5" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856325 4872 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856334 4872 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856344 4872 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856353 4872 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856362 4872 flags.go:64] FLAG: 
--contention-profiling="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856371 4872 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856380 4872 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856390 4872 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856399 4872 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856410 4872 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856419 4872 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856428 4872 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856437 4872 flags.go:64] FLAG: --enable-load-reader="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856446 4872 flags.go:64] FLAG: --enable-server="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856455 4872 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856466 4872 flags.go:64] FLAG: --event-burst="100" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856477 4872 flags.go:64] FLAG: --event-qps="50" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856487 4872 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856496 4872 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856505 4872 flags.go:64] FLAG: --eviction-hard="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856515 4872 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856525 4872 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856534 4872 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856543 4872 flags.go:64] FLAG: --eviction-soft="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856552 4872 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856561 4872 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856570 4872 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856580 4872 flags.go:64] FLAG: --experimental-mounter-path="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856589 4872 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856598 4872 flags.go:64] FLAG: --fail-swap-on="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856607 4872 flags.go:64] FLAG: --feature-gates="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856619 4872 flags.go:64] FLAG: --file-check-frequency="20s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856629 4872 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856639 4872 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856648 4872 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856657 4872 flags.go:64] FLAG: --healthz-port="10248" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856667 4872 flags.go:64] FLAG: --help="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856676 4872 flags.go:64] FLAG: --hostname-override="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856714 4872 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856725 4872 flags.go:64] FLAG: --http-check-frequency="20s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856735 4872 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856744 4872 flags.go:64] FLAG: --image-credential-provider-config="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856754 4872 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856763 4872 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856773 4872 flags.go:64] FLAG: --image-service-endpoint="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856782 4872 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856792 4872 flags.go:64] FLAG: --kube-api-burst="100" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856800 4872 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856810 4872 flags.go:64] FLAG: --kube-api-qps="50" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856819 4872 flags.go:64] FLAG: --kube-reserved="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856828 4872 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856837 4872 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856847 4872 flags.go:64] FLAG: --kubelet-cgroups="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856856 4872 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856865 4872 flags.go:64] FLAG: --lock-file="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856873 4872 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856883 4872 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856891 4872 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856905 4872 flags.go:64] FLAG: --log-json-split-stream="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856915 4872 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856923 4872 flags.go:64] FLAG: --log-text-split-stream="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856932 4872 flags.go:64] FLAG: --logging-format="text" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856941 4872 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856953 4872 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856962 4872 flags.go:64] FLAG: 
--manifest-url="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856971 4872 flags.go:64] FLAG: --manifest-url-header="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856982 4872 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.856991 4872 flags.go:64] FLAG: --max-open-files="1000000" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857002 4872 flags.go:64] FLAG: --max-pods="110" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857012 4872 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857021 4872 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857030 4872 flags.go:64] FLAG: --memory-manager-policy="None" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857039 4872 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857048 4872 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857057 4872 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857066 4872 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857085 4872 flags.go:64] FLAG: --node-status-max-images="50" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857094 4872 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857104 4872 flags.go:64] FLAG: --oom-score-adj="-999" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857113 4872 flags.go:64] FLAG: --pod-cidr="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857122 4872 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857135 4872 flags.go:64] FLAG: --pod-manifest-path="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857144 4872 flags.go:64] FLAG: --pod-max-pids="-1" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857154 4872 flags.go:64] FLAG: --pods-per-core="0" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857163 4872 flags.go:64] FLAG: --port="10250" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857173 4872 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857182 4872 flags.go:64] FLAG: --provider-id="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857190 4872 flags.go:64] FLAG: --qos-reserved="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857201 4872 flags.go:64] FLAG: --read-only-port="10255" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857210 4872 flags.go:64] FLAG: --register-node="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857219 4872 flags.go:64] FLAG: --register-schedulable="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857228 4872 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857243 4872 flags.go:64] FLAG: --registry-burst="10" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857252 4872 flags.go:64] FLAG: 
--registry-qps="5" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857261 4872 flags.go:64] FLAG: --reserved-cpus="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857271 4872 flags.go:64] FLAG: --reserved-memory="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857283 4872 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857292 4872 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857301 4872 flags.go:64] FLAG: --rotate-certificates="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857310 4872 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857319 4872 flags.go:64] FLAG: --runonce="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857328 4872 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857338 4872 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857347 4872 flags.go:64] FLAG: --seccomp-default="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857356 4872 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857390 4872 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857400 4872 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857409 4872 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857419 4872 flags.go:64] FLAG: --storage-driver-password="root" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857428 4872 flags.go:64] FLAG: --storage-driver-secure="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857437 4872 flags.go:64] FLAG: --storage-driver-table="stats" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857446 4872 flags.go:64] FLAG: --storage-driver-user="root" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857455 4872 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857464 4872 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857473 4872 flags.go:64] FLAG: --system-cgroups="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857482 4872 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857496 4872 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857506 4872 flags.go:64] FLAG: --tls-cert-file="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857514 4872 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857534 4872 flags.go:64] FLAG: --tls-min-version="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857543 4872 flags.go:64] FLAG: --tls-private-key-file="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857551 4872 flags.go:64] FLAG: --topology-manager-policy="none" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857561 4872 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857571 4872 flags.go:64] FLAG: 
--topology-manager-scope="container" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857581 4872 flags.go:64] FLAG: --v="2" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857593 4872 flags.go:64] FLAG: --version="false" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857604 4872 flags.go:64] FLAG: --vmodule="" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857615 4872 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.857625 4872 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857876 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857887 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857897 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857905 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857914 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857924 4872 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857933 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857942 4872 feature_gate.go:330] unrecognized feature gate: Example Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857950 4872 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857959 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857968 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857976 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857984 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.857993 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858000 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858008 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858016 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858024 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858031 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858039 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858047 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 06:00:29 crc 
kubenswrapper[4872]: W0203 06:00:29.858055 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858064 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858071 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858079 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858087 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858095 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858103 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858110 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858118 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858129 4872 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858140 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858150 4872 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858159 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858169 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858177 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858185 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858195 4872 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
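[annotation] Everything this kubelet actually started with is recorded once in the flags.go:64 "FLAG:" dump earlier in this boot log. A small sketch that reconstructs that effective command line as a dict, again assuming the excerpt is saved as kubelet.log (hypothetical name):

```python
import re

# Each dump entry looks like: flags.go:64] FLAG: --name="value"
FLAG = re.compile(r'flags\.go:64\] FLAG: (--[\w-]+)="(.*?)"')

flags = {}
with open("kubelet.log") as f:  # assumed path to the saved excerpt
    for name, value in FLAG.findall(f.read()):
        flags[name] = value

# Against this boot these should recover the values logged above:
print(flags.get("--config"))    # /etc/kubernetes/kubelet.conf
print(flags.get("--node-ip"))   # 192.168.126.11
print(flags.get("--register-with-taints"))  # node-role.kubernetes.io/master=:NoSchedule
```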
Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858206 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858214 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858224 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858233 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858242 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858251 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858260 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858268 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858277 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858285 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858294 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858302 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858311 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858319 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858327 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858335 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858343 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858351 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858359 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858367 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858375 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858383 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858393 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858402 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858410 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858419 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858427 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858434 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858443 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858451 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858459 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858466 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.858477 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.858501 4872 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.873183 4872 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.873232 4872 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873416 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873444 4872 feature_gate.go:330] unrecognized feature gate: Example Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873453 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873463 4872 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873471 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873479 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873487 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873495 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873506 4872 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
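[annotation] The feature_gate.go:386 entry above is the net result of all the gate parsing: a Go-rendered map of the gates this kubelet actually applies. The surrounding "unrecognized feature gate" warnings name what appear to be OpenShift-level gates (AlibabaPlatform, GatewayAPI, NewOLM, ...) that the kubelet's own gate table does not know, which is why they only warn rather than fail. A sketch that turns the map[...] rendering into a Python dict; kubelet.log is again an assumed file name:

```python
import re

# Parse the Go-formatted "feature gates: {map[Name:bool ...]}" entry.
GATES = re.compile(r"feature gates: \{map\[([^\]]+)\]\}")
PAIR = re.compile(r"(\w+):(true|false)")

with open("kubelet.log") as f:  # assumed path
    body = GATES.search(f.read()).group(1)

gates = {name: val == "true" for name, val in PAIR.findall(body)}
print(gates["KMSv1"], gates["ValidatingAdmissionPolicy"])  # True True
```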
Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873521 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873530 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873538 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873547 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873556 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873564 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873572 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873580 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873588 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873595 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873603 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873611 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873619 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873627 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873634 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873642 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873650 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873660 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873670 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873679 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873727 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873738 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873747 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873755 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873763 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873774 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873783 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873792 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873800 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873808 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873816 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873825 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873835 4872 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873845 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873854 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873862 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873870 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873878 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873886 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873894 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873901 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873909 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873917 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873925 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873933 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873940 4872 feature_gate.go:330] unrecognized feature gate: 
GCPLabelsTags Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873948 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873955 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873963 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873970 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873978 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873987 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.873994 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874002 4872 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874010 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874018 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874025 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874033 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874041 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874048 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874056 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874064 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.874078 4872 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874297 4872 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874312 4872 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874321 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874330 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874339 4872 feature_gate.go:330] unrecognized feature gate: Example Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874347 4872 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy 
Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874356 4872 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874364 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874374 4872 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874384 4872 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874392 4872 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874400 4872 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874410 4872 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874423 4872 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874439 4872 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874450 4872 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874460 4872 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874470 4872 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874480 4872 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874491 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874501 4872 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874511 4872 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874521 4872 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874532 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874542 4872 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874553 4872 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874562 4872 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874576 4872 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
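[annotation] The same unrecognized-gate warnings are emitted in several passes during startup (note the repeating timestamp clusters around .853x, .857x/.858x, .873x and .874x), so the noise multiplies even though the set of gates stays constant. A quick way to confirm that from the same assumed excerpt: near-equal counts across gates mean repeated passes, not different gate sets.

```python
import re
from collections import Counter

# Tally every unrecognized-gate warning in the saved journal excerpt.
UNKNOWN = re.compile(r"unrecognized feature gate: (\w+)")

with open("kubelet.log") as f:  # assumed path
    counts = Counter(UNKNOWN.findall(f.read()))

print(len(counts), "distinct gates")
print(counts.most_common(3))  # each gate should appear once per pass
```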
Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874588 4872 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874599 4872 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874609 4872 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874619 4872 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874629 4872 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874639 4872 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874650 4872 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874661 4872 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874671 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874681 4872 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874746 4872 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874757 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874767 4872 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874778 4872 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874787 4872 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874796 4872 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874806 4872 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874815 4872 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874827 4872 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874835 4872 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874842 4872 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874850 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874858 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874866 4872 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874873 4872 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874881 4872 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874889 4872 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874897 4872 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874905 4872 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874912 4872 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874920 4872 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874928 4872 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874936 4872 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874946 4872 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874955 4872 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874966 4872 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874975 4872 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874985 4872 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.874993 4872 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.875001 4872 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.875008 4872 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.875016 4872 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 06:00:29 crc kubenswrapper[4872]: W0203 06:00:29.875026 4872 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.875039 4872 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.875300 4872 server.go:940] "Client rotation is on, will bootstrap in background" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.880982 4872 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.881103 4872 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
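[annotation] The entries that follow start client certificate rotation: the current kubeconfig is still valid, the client cert expires 2026-02-24, and the manager has already chosen a rotation deadline of 2025-12-22, well before expiry (the immediate CSR POST then fails with connection refused because the API server at api-int.crc.testing:6443 is not yet up this early in boot, which is expected). Upstream client-go appears to pick that deadline at a random point roughly 70-90% of the way through the certificate's validity; a toy recreation under that assumption, with an assumed NotBefore of 2025-02-24 (the log shows only the expiry):

```python
import random
from datetime import datetime

# Toy recreation of the rotation-deadline jitter (assumed 70-90% of the
# validity window, per upstream client-go behavior; illustrative only).
not_before = datetime(2025, 2, 24, 5, 52, 8)   # assumed issue time
not_after = datetime(2026, 2, 24, 5, 52, 8)    # expiry reported in the log

total = not_after - not_before
deadline = not_before + total * random.uniform(0.7, 0.9)
print(deadline)  # lands between Nov and Jan for a one-year cert; the logged
                 # deadline 2025-12-22 sits ~83% through this window
```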
Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.885171 4872 server.go:997] "Starting client certificate rotation" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.885217 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.886336 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 14:37:48.242456257 +0000 UTC Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.886448 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.909849 4872 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.915068 4872 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 06:00:29 crc kubenswrapper[4872]: E0203 06:00:29.915838 4872 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.934884 4872 log.go:25] "Validated CRI v1 runtime API" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.975673 4872 log.go:25] "Validated CRI v1 image API" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.979320 4872 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.985570 4872 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-03-05-54-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 03 06:00:29 crc kubenswrapper[4872]: I0203 06:00:29.985623 4872 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.005478 4872 manager.go:217] Machine: {Timestamp:2026-02-03 06:00:30.002995638 +0000 UTC m=+0.585687112 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54 BootID:4d942662-8847-4f84-a334-73ce9180bb14 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 
Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0d:76:42 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0d:76:42 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ab:eb:d1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:59:40:1c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a7:53:b8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c7:c7:79 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:93:4d:49:7b:f6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:38:11:af:06:e3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.005806 4872 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. 
Perf event counters are not available. Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.006069 4872 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.006680 4872 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.007041 4872 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.007100 4872 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.007427 4872 topology_manager.go:138] "Creating topology manager with none policy" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.007448 4872 container_manager_linux.go:303] "Creating device plugin manager" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.008045 4872 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.008105 4872 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.008339 4872 state_mem.go:36] "Initialized new in-memory state store" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.008488 4872 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.012029 4872 kubelet.go:418] "Attempting to sync node with API server" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.012078 4872 kubelet.go:313] "Adding static pod 
path" path="/etc/kubernetes/manifests" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.012135 4872 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.012164 4872 kubelet.go:324] "Adding apiserver pod source" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.012214 4872 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.016767 4872 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.016901 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.017014 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.017173 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.017246 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.017791 4872 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.025604 4872 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027766 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027810 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027827 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027842 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027865 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027878 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027891 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027913 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027929 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027944 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027964 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.027978 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.029062 4872 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.029878 4872 server.go:1280] "Started kubelet" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.030037 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.030803 4872 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 03 06:00:30 crc systemd[1]: Started Kubernetes Kubelet. 
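[editor's note] "Starting to listen" address="0.0.0.0" port=10250 above is the kubelet's main serving endpoint, backed by the kubelet-serving certificate loaded earlier. A minimal sketch for checking that listener from the node follows; it is an assumption-laden probe, not part of the kubelet: it skips TLS verification (a real check would trust /etc/kubernetes/kubelet-ca.crt), and an anonymous request may get 401 Unauthorized since /healthz on 10250 is normally authorization-gated.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: unverified, anonymous probe of the local node.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:10250/healthz")
	if err != nil {
		fmt.Println("kubelet not listening yet:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP response (even 401) proves the listener from the log is up.
	fmt.Println("kubelet healthz status:", resp.Status)
}
```

This distinguishes "kubelet process up and serving" (any HTTP status) from "node still starting" (connection refused), which is useful while the apiserver-side errors above are still occurring.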
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.031063 4872 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.032604 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.032858 4872 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.033016 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:51:43.564657233 +0000 UTC Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.033141 4872 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.033168 4872 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.034129 4872 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.033076 4872 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.034571 4872 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.035383 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.035491 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.036046 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.037340 4872 server.go:460] "Adding debug handlers to kubelet server" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.043327 4872 factory.go:55] Registering systemd factory Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.043395 4872 factory.go:221] Registration of the systemd container factory successfully Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.045048 4872 factory.go:153] Registering CRI-O factory Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.045086 4872 factory.go:221] Registration of the crio container factory successfully Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.045203 4872 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.045236 4872 factory.go:103] Registering Raw 
factory Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.045263 4872 manager.go:1196] Started watching for new ooms in manager Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.046244 4872 manager.go:319] Starting recovery of all containers Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.043322 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890a724cb677dc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 06:00:30.029823433 +0000 UTC m=+0.612514887,LastTimestamp:2026-02-03 06:00:30.029823433 +0000 UTC m=+0.612514887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055460 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055574 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055598 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055619 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055766 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055786 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055822 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055845 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055926 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055949 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055971 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.055997 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056015 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056112 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056133 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056159 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056182 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056201 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056228 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056247 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056272 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056295 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056376 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056403 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056422 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056445 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056470 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056497 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056517 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056541 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056618 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056642 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056662 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056681 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056734 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056807 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056836 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056886 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.056908 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067068 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067109 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067130 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067150 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067181 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067200 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067260 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067296 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067316 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067341 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067359 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067377 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067443 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067527 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067571 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067601 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067625 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067735 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067776 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067796 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067849 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067879 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.067997 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068027 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068081 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068167 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068196 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068255 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068283 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068323 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068363 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068412 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068432 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068451 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068515 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068536 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068560 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068578 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068623 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068648 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068730 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068760 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068779 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068800 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068823 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068870 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068921 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068939 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068958 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.068980 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069000 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069022 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069040 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069058 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069117 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069136 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069159 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069200 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069252 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069307 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.069326 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070204 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070244 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070312 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070372 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070415 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070485 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070520 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070585 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070646 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070680 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070746 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070778 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070837 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070867 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070935 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.070964 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071019 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071048 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071103 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071127 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071157 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071212 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071243 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071302 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071325 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071387 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071408 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071436 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071493 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071515 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071575 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071742 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071768 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071831 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071853 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.071952 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072008 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072051 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072188 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072216 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072277 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072301 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072321 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072381 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072403 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072462 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072506 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072560 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072590 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072638 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072666 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072726 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072748 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072772 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072932 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.072959 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.073013 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.073035 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.073138 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.073881 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.075251 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.075390 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.075520 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.075639 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.075802 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.075931 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.076052 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078632 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078668 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078728 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078752 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078796 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078812 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078835 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.078998 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079022 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079039 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079089 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079128 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079173 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079197 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079215 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079231 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079252 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079269 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079289 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079305 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079372 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079395 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079411 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079427 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079449 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079471 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079520 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079580 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079622 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079645 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079755 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079865 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.079883 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084146 4872 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084184 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084251 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084425 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084468 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084548 4872 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084570 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084611 4872 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084638 4872 reconstruct.go:97] "Volume reconstruction finished" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.084652 4872 reconciler.go:26] "Reconciler: start to sync state"
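
[Editor's note] The reconstruct.go:130 entries above are the freshly restarted kubelet rebuilding its actual-state-of-world volume cache from what it finds under /var/lib/kubelet/pods; every volume discovered on disk is first recorded as "uncertain" until the reconciler (started on the last line above) verifies the mount. A minimal sketch of that bookkeeping, using hypothetical types whose fields merely mirror the log keys, not the real kubelet implementation:

```go
// Sketch only: illustrates the "uncertain volume" bookkeeping behind the
// reconstruct.go:130 log lines; types and fields are hypothetical.
package main

import "fmt"

// reconstructedVolume is a stand-in for the kubelet's internal record;
// field names mirror the log keys (podName, volumeName, seLinuxMountContext).
type reconstructedVolume struct {
	PodName             string // pod UID taken from the on-disk directory
	VolumeName          string // plugin-qualified unique volume name
	SELinuxMountContext string // empty in the log: no context was recorded
	Uncertain           bool   // cleared once the reconciler verifies the mount
}

func main() {
	actualState := map[string]reconstructedVolume{}
	// One entry from the log above, added exactly as "uncertain".
	v := reconstructedVolume{
		PodName:    "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b",
		VolumeName: "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert",
		Uncertain:  true,
	}
	actualState[v.VolumeName] = v
	fmt.Printf("reconstructed %d volume(s); reconciler will verify them\n", len(actualState))
}
```

Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.089420 4872 manager.go:324] Recovery completed Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.102358 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.106766 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.106838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.106859 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.110750 4872 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.110796 4872 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.110906 4872 state_mem.go:36] "Initialized new in-memory state store" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.118037 4872 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.121346 4872 kubelet_network_linux.go:50] "Initialized iptables rules."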
protocol="IPv6" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.121393 4872 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.121425 4872 kubelet.go:2335] "Starting kubelet main sync loop" Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.121476 4872 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.122099 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.122298 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.135347 4872 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.138586 4872 policy_none.go:49] "None policy: Start" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.139657 4872 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.139715 4872 state_mem.go:35] "Initializing new in-memory state store" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.203736 4872 manager.go:334] "Starting Device Plugin manager" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.203790 4872 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.203808 4872 server.go:79] "Starting device plugin registration server" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.204262 4872 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.204283 4872 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.204811 4872 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.204992 4872 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.205004 4872 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.215841 4872 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.222088 4872 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
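
[Editor's note] The "SyncLoop ADD" with source="file" is the kubelet picking up the five control-plane static pods from manifests on disk, which is why it can proceed even though every API call above still fails with connection refused. A sketch of how such a file source is read, assuming the conventional staticPodPath of /etc/kubernetes/manifests; the path and error handling are illustrative, not taken from this log:

```go
// Sketch only: reads static pod manifests the way the kubelet's "file"
// source does conceptually. Assumes /etc/kubernetes/manifests; requires
// the k8s.io/api and sigs.k8s.io/yaml modules.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	dir := "/etc/kubernetes/manifests" // assumption: CRC's staticPodPath
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue // skip unreadable files in this sketch
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			continue // skip non-pod files
		}
		// The kubelet emits "SyncLoop ADD" with source="file" for each of these.
		fmt.Printf("%s/%s\n", pod.Namespace, pod.Name)
	}
}
```

Feb 03 06:00:30 crc kubenswrapper[4872]: 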
I0203 06:00:30.222219 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.223491 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.223524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.223536 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.223672 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.224144 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.224248 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.224835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.224860 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.224870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.225106 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.225238 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.225287 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226007 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226056 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226625 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226844 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226928 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.226998 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.227351 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.227438 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.229174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.229228 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.229265 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.229461 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.229724 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.229765 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230052 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230281 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230315 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230464 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.230490 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.231082 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.231126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.231142 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.231320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.231342 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.231353 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.237856 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287393 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287479 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287517 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287552 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287588 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287643 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287673 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287730 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287805 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287836 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287868 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287899 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287927 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287955 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.287985 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.304443 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.305630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.305666 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.305678 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.305721 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.306150 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
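
[Editor's note] Each static pod volume goes through the two-phase reconciler handshake visible here: VerifyControllerAttachedVolume first, then MountVolume.SetUp (continuing below). All of them are host-path volumes, and the UniqueName in the log is plugin name + pod UID + volume name. A sketch of that shape, assuming the k8s.io/api module; the concrete host path is a placeholder, not read from this log:

```go
// Sketch only: the shape of the host-path volumes being verified and
// mounted above, plus the UniqueName pattern visible in the log.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1" // requires the k8s.io/api module
)

// uniqueVolumeName matches the pattern in the log lines, e.g.
// kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir.
func uniqueVolumeName(plugin, podUID, volName string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volName)
}

func main() {
	dirType := corev1.HostPathDirectoryOrCreate // assumption about the manifest
	vol := corev1.Volume{
		Name: "cert-dir",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/var/lib/kubelet/pki", // placeholder path, not from the log
				Type: &dirType,
			},
		},
	}
	fmt.Println(uniqueVolumeName("kubernetes.io/host-path",
		"2139d3e2895fc6797b9c76a1b4c9886d", vol.Name))
}
```

Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.388968 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389039 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 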
06:00:30.389087 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389145 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389179 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389210 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389236 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389279 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389284 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389245 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389302 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389350 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389378 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389353 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389409 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389439 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389221 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389466 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389495 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389495 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389555 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389565 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389599 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389498 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389606 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389561 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389599 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389680 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389640 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.389843 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.506526 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.508465 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.508511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.508523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.508545 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.508901 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.562366 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.571494 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.592213 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.614321 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.618787 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.619350 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9388e1c34a13e8501ae8682d2b06df034fbc4e990fed7eb6e67044853a6905d6 WatchSource:0}: Error finding container 9388e1c34a13e8501ae8682d2b06df034fbc4e990fed7eb6e67044853a6905d6: Status 404 returned error can't find the container with id 9388e1c34a13e8501ae8682d2b06df034fbc4e990fed7eb6e67044853a6905d6
Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.622372 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-612945896cdcd0d470ca341de9080f6bbc0fd2b1e70a093a0b4df21a0e02327e WatchSource:0}: Error finding container 612945896cdcd0d470ca341de9080f6bbc0fd2b1e70a093a0b4df21a0e02327e: Status 404 returned error can't find the container with id 612945896cdcd0d470ca341de9080f6bbc0fd2b1e70a093a0b4df21a0e02327e
Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.636646 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-08ca2debcbf535905eea171ec025f52b6da8b59116b7a96309fa81f97de2ff6e WatchSource:0}: Error finding container 08ca2debcbf535905eea171ec025f52b6da8b59116b7a96309fa81f97de2ff6e: Status 404 returned error can't find the container with id 08ca2debcbf535905eea171ec025f52b6da8b59116b7a96309fa81f97de2ff6e
Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.638716 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms"
Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.640322 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8316a9aa878ae622713c604761ef27100c814efbbc060993bac5e1d6e103396c WatchSource:0}: Error finding container 8316a9aa878ae622713c604761ef27100c814efbbc060993bac5e1d6e103396c: Status 404 returned error can't find the container with id 8316a9aa878ae622713c604761ef27100c814efbbc060993bac5e1d6e103396c
Feb 03 06:00:30 crc kubenswrapper[4872]: W0203 06:00:30.647920 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c91169a1ddd8689da7ae603bb9a7f31558785913d255aaaab237f6bdb0b986de WatchSource:0}: Error finding container c91169a1ddd8689da7ae603bb9a7f31558785913d255aaaab237f6bdb0b986de: Status 404 returned error can't find the container with id c91169a1ddd8689da7ae603bb9a7f31558785913d255aaaab237f6bdb0b986de
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.909205 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.911362 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.911394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.911419 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:30 crc kubenswrapper[4872]: I0203 06:00:30.911446 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 06:00:30 crc kubenswrapper[4872]: E0203 06:00:30.911784 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.031276 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.033438 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:29:35.036710266 +0000 UTC
Feb 03 06:00:31 crc kubenswrapper[4872]: W0203 06:00:31.062118 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:31 crc kubenswrapper[4872]: E0203 06:00:31.062170 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Feb 03 06:00:31 crc kubenswrapper[4872]: W0203 06:00:31.097731 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:31 crc kubenswrapper[4872]: E0203 06:00:31.097785 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.126033 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"08ca2debcbf535905eea171ec025f52b6da8b59116b7a96309fa81f97de2ff6e"}
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.127566 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9388e1c34a13e8501ae8682d2b06df034fbc4e990fed7eb6e67044853a6905d6"}
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.128873 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"612945896cdcd0d470ca341de9080f6bbc0fd2b1e70a093a0b4df21a0e02327e"}
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.129992 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c91169a1ddd8689da7ae603bb9a7f31558785913d255aaaab237f6bdb0b986de"}
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.130758 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8316a9aa878ae622713c604761ef27100c814efbbc060993bac5e1d6e103396c"}
Feb 03 06:00:31 crc kubenswrapper[4872]: W0203 06:00:31.435973 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:31 crc kubenswrapper[4872]: E0203 06:00:31.436051 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Feb 03 06:00:31 crc kubenswrapper[4872]: E0203 06:00:31.439922 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s"
Feb 03 06:00:31 crc kubenswrapper[4872]: W0203 06:00:31.463526 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:31 crc kubenswrapper[4872]: E0203 06:00:31.463628 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.712149 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.717297 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.717351 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.717370 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.717405 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 06:00:31 crc kubenswrapper[4872]: E0203 06:00:31.717929 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Feb 03 06:00:31 crc kubenswrapper[4872]: I0203 06:00:31.940993 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 06:00:31 crc kubenswrapper[4872]: E0203 06:00:31.942078 4872 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.031394 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.034483 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:54:40.161786497 +0000 UTC
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.136831 4872 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568" exitCode=0
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.136922 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568"}
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.137031 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.138127 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.138155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.138167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.144781 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe"}
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.144816 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729"}
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.144830 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0"}
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.146531 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf" exitCode=0
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.146582 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf"}
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.146670 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.147529 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.147552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.147563 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.149127 4872 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9" exitCode=0
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.149173 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9"}
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.149253 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.149876 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.149897 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.149906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.151796 4872 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d" exitCode=0
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.151819 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d"}
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.151866 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.153005 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.154161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.154177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.154186 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.154257 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.154290 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:32 crc kubenswrapper[4872]: I0203 06:00:32.154309 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.030754 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.034952 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:36:24.220858312 +0000 UTC
Feb 03 06:00:33 crc kubenswrapper[4872]: E0203 06:00:33.040521 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.156512 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.156614 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.157566 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.157582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.157589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.159954 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.159977 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.159986 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.161760 4872 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253" exitCode=0
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.161799 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.161873 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.162796 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.162831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.162843 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.167134 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.167178 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.168417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.168466 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.168488 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.171428 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.171471 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.171484 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8"}
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.171593 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.172618 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.172669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.172732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:33 crc kubenswrapper[4872]: W0203 06:00:33.232432 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Feb 03 06:00:33 crc kubenswrapper[4872]: E0203 06:00:33.232567 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Feb 03 06:00:33 crc kubenswrapper[4872]: E0203 06:00:33.313663 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890a724cb677dc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 06:00:30.029823433 +0000 UTC m=+0.612514887,LastTimestamp:2026-02-03 06:00:30.029823433 +0000 UTC m=+0.612514887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.318965 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.320070 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.320095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.320103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:33 crc kubenswrapper[4872]: I0203 06:00:33.320121 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 06:00:33 crc kubenswrapper[4872]: E0203 06:00:33.320421 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.035772 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:36:02.397349667 +0000 UTC
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.181264 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106"}
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.181299 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.181331 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077"}
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.182574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.182649 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.182669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.186971 4872 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694" exitCode=0
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.187035 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694"}
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.187117 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.187210 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.187277 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.187128 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.187588 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.189388 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.189433 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.189451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.189462 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.189498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.189514 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.190381 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.190555 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.190671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.191990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.192048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:34 crc kubenswrapper[4872]: I0203 06:00:34.192072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.037023 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:38:04.98412703 +0000 UTC
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.194406 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e4558a6aad40f4edeecc1ee4cf08b24c8f36a10f1f719c0b751f703e20022c2"}
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.194457 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ebac1c8b41abb507c1c46c2c41db89d11df7de6a69743e384c5b2f605a223b2e"}
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.194475 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"361b3561576fcd438576ea10f1626ee87679dd60ad84ca855e148f79dd4a6a5e"}
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.194676 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.194766 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.197022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.197067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.197085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.792066 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.792458 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.794841 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.794903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:35 crc kubenswrapper[4872]: I0203 06:00:35.794921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.038092 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:21:17.31743821 +0000 UTC
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.057303 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.210466 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a41df6e59e9659b116572130787f5ac782ba462c7a7335dc09de90fd76e7348"}
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.210539 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"647fd7b3133db8861de3ba448d648401cc3745d1d43c35809a5b33845077175c"}
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.210805 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.212218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.212274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.212319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.222024 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.222174 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.222278 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.223608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.223682 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.223747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.521109 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.523413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.523506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.523526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.523581 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.646502 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.646856 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.648818 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.648882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:36 crc kubenswrapper[4872]: I0203 06:00:36.648906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.039145 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:30:09.008115924 +0000 UTC
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.215864 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.217748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.217845 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.217902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.467912 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.468122 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.468180 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.474793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.474864 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.474887 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.478629 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.478888 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.480131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.480169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:37 crc kubenswrapper[4872]: I0203 06:00:37.480187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.039539 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:35:24.247078167 +0000 UTC
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.399626 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.400063 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.402118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.402181 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.402225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.445650 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.445945 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.447525 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.447581 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.447599 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.793187 4872 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 03 06:00:38 crc kubenswrapper[4872]: I0203 06:00:38.793988 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.039896 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:15:19.692152089 +0000 UTC
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.239184 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.239449 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.242674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.242787 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.242810 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.251304 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:39 crc kubenswrapper[4872]: I0203 06:00:39.426317 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:40 crc kubenswrapper[4872]: I0203 06:00:40.040456 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:06:30.970101556 +0000 UTC
Feb 03 06:00:40 crc kubenswrapper[4872]: E0203 06:00:40.216251 4872 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 03 06:00:40 crc kubenswrapper[4872]: I0203 06:00:40.224311 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:40 crc kubenswrapper[4872]: I0203 06:00:40.225407 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:40 crc kubenswrapper[4872]: I0203 06:00:40.225447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:40 crc kubenswrapper[4872]: I0203 06:00:40.225463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:41 crc kubenswrapper[4872]: I0203 06:00:41.041439 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:21:01.677146163 +0000 UTC
Feb 03 06:00:41 crc kubenswrapper[4872]: I0203 06:00:41.227902 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:41 crc kubenswrapper[4872]: I0203 06:00:41.229597 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:41 crc kubenswrapper[4872]: I0203 06:00:41.229655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:41 crc kubenswrapper[4872]: I0203 06:00:41.229672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:41 crc kubenswrapper[4872]: I0203 06:00:41.234257 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 03 06:00:42 crc kubenswrapper[4872]: I0203 06:00:42.042572 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:21:11.763289289 +0000 UTC
Feb 03 06:00:42 crc kubenswrapper[4872]: I0203 06:00:42.231459 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:42 crc kubenswrapper[4872]: I0203 06:00:42.232838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:42 crc kubenswrapper[4872]: I0203 06:00:42.232912 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:42 crc kubenswrapper[4872]: I0203 06:00:42.232934 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:43 crc kubenswrapper[4872]: I0203 06:00:43.043519 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:28:08.648493385 +0000 UTC
Feb 03 06:00:43 crc kubenswrapper[4872]: I0203 06:00:43.085258 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 03 06:00:43 crc kubenswrapper[4872]: I0203 06:00:43.085533 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 03 06:00:43 crc kubenswrapper[4872]: W0203 06:00:43.834818 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 03 06:00:43 crc kubenswrapper[4872]: I0203 06:00:43.834954 4872 trace.go:236] Trace[1646885902]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 06:00:33.833) (total time: 10001ms):
Feb 03 06:00:43 crc kubenswrapper[4872]: Trace[1646885902]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:00:43.834)
Feb 03 06:00:43 crc kubenswrapper[4872]: Trace[1646885902]: [10.001362751s] [10.001362751s] END
Feb 03 06:00:43 crc kubenswrapper[4872]: E0203 06:00:43.834985 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 03 06:00:43 crc kubenswrapper[4872]: W0203 06:00:43.868516 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 03 06:00:43 crc kubenswrapper[4872]: I0203 06:00:43.868611 4872 trace.go:236] Trace[1800999200]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 06:00:33.866) (total time: 10001ms):
Feb 03 06:00:43 crc kubenswrapper[4872]: Trace[1800999200]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:00:43.868)
Feb 03 06:00:43 crc kubenswrapper[4872]: Trace[1800999200]: [10.001824342s] [10.001824342s] END
Feb 03 06:00:43 crc kubenswrapper[4872]: E0203 06:00:43.868632 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.031798 4872 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.045206 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:01:29.203989157 +0000 UTC
Feb 03 06:00:44 crc kubenswrapper[4872]: W0203 06:00:44.054722 4872 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.054979 4872 trace.go:236] Trace[1228526718]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 06:00:34.053) (total time: 10001ms):
Feb 03 06:00:44 crc kubenswrapper[4872]: Trace[1228526718]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:00:44.054)
Feb 03 06:00:44 crc kubenswrapper[4872]: Trace[1228526718]: [10.001878443s] [10.001878443s] END
Feb 03 06:00:44 crc kubenswrapper[4872]: E0203 06:00:44.055244 4872 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.265753 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.266065 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.273769 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.273827 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.576033 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.576582 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.578014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.578197 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.578321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:44 crc kubenswrapper[4872]: I0203 06:00:44.628117 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 03 06:00:45 crc kubenswrapper[4872]: I0203 06:00:45.045730 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:51:25.578773407 +0000 UTC
Feb 03 06:00:45 crc kubenswrapper[4872]: I0203 06:00:45.240393 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:45 crc kubenswrapper[4872]: I0203 06:00:45.241563 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:45 crc kubenswrapper[4872]: I0203 06:00:45.241637 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:45 crc kubenswrapper[4872]: I0203 06:00:45.241671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:45 crc kubenswrapper[4872]: I0203 06:00:45.282553 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 03 06:00:46 crc kubenswrapper[4872]: I0203 06:00:46.046261 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:10:36.748207868 +0000 UTC
Feb 03 06:00:46 crc kubenswrapper[4872]: I0203 06:00:46.244490 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:46 crc kubenswrapper[4872]: I0203 06:00:46.246116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:46 crc kubenswrapper[4872]: I0203 06:00:46.246185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:46 crc kubenswrapper[4872]: I0203 06:00:46.246205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:47 crc kubenswrapper[4872]: I0203 06:00:47.047021 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:20:57.80102979 +0000 UTC
Feb 03 06:00:47 crc kubenswrapper[4872]: I0203 06:00:47.478493 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:47 crc kubenswrapper[4872]: I0203 06:00:47.478912 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:47 crc kubenswrapper[4872]: I0203 06:00:47.481141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:00:47 crc kubenswrapper[4872]: I0203 06:00:47.481235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:00:47 crc kubenswrapper[4872]: I0203 06:00:47.481256 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:00:47 crc kubenswrapper[4872]: I0203 06:00:47.489157 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.047225 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:07:44.880494134 +0000 UTC
Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.250643 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.251825 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.251871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.251889 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.268440 4872 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.736969 4872 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.792864 4872 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.792986 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 06:00:48 crc kubenswrapper[4872]: I0203 06:00:48.887919 4872 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.022228 4872 apiserver.go:52] "Watching apiserver" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.035743 4872 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.036285 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.036875 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.037105 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.037207 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.037332 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.037385 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.037419 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.037534 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.037611 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.037825 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.042898 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.043427 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.046535 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.046925 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.047252 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.047535 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.047845 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.048093 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.048321 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:44:35.366956542 +0000 UTC Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.048424 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.096016 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.116372 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.135959 4872 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.140106 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.158423 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.173341 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.195294 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.210586 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.279492 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.283503 4872 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.285024 4872 trace.go:236] Trace[1193939759]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 06:00:38.501) (total time: 10783ms): Feb 03 06:00:49 crc kubenswrapper[4872]: Trace[1193939759]: ---"Objects listed" error: 10783ms (06:00:49.284) Feb 03 06:00:49 crc kubenswrapper[4872]: Trace[1193939759]: [10.783583327s] [10.783583327s] END Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.285061 4872 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.285154 4872 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.300394 4872 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.317140 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38206->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.317437 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38206->192.168.126.11:17697: read: connection reset by peer" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.318333 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure 
output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.318421 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.318856 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.318899 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386463 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386551 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386605 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386646 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386680 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386757 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386797 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386838 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386880 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386899 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.386914 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387001 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387032 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387065 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387090 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387118 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387139 4872 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387160 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387182 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387208 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387229 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387250 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387269 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387291 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387312 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387334 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: 
I0203 06:00:49.387353 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387374 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387400 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387423 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387449 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387474 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387498 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387523 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387570 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 06:00:49 
crc kubenswrapper[4872]: I0203 06:00:49.387594 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387616 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387638 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387659 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387680 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387735 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387755 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387777 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387799 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387821 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387841 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387862 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387883 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387903 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387907 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387925 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387961 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.387982 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388002 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388023 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388042 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388080 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388103 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388124 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388150 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388155 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388171 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388255 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388295 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388268 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388338 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388380 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388392 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388431 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388465 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388505 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388542 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388582 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388587 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388623 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388662 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388731 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388766 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388802 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388845 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388880 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388916 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388957 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388996 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389037 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389076 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389111 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389145 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389184 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389221 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389258 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389298 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389336 4872 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389371 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389408 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391108 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391154 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391190 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391426 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391472 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391537 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391595 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391633 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391673 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391756 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391814 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391850 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391886 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391943 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391983 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392022 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392105 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392146 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392183 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392254 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392289 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392324 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392359 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392396 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392435 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392471 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392528 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392565 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392600 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392636 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392673 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392808 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392851 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392888 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396106 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396148 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396179 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396226 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396252 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396279 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396306 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396330 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396355 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396562 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396805 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396830 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396854 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396879 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396901 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398459 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398503 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398545 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398585 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398608 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398631 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398655 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401797 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401848 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401883 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401921 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401945 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401973 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402007 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402090 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402132 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402166 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402203 4872 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402240 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402277 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402308 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402339 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402372 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402411 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402446 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402482 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402521 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402554 4872 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402592 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402628 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402659 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402812 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402846 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402891 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402916 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402944 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402967 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402990 4872 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403015 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403038 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403060 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403087 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403112 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403134 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403157 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403179 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403203 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 
06:00:49.403229 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403252 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403295 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403330 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403379 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403505 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403559 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403584 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403607 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403668 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403840 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403874 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403917 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403948 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403977 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404012 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404050 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404112 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.388952 4872 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389113 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389208 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389232 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389440 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389469 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389572 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389727 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389723 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389825 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.389945 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.390197 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.390520 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.390929 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391336 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391559 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391841 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391949 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392029 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392256 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392403 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392774 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392854 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.392927 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.391420 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.393103 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.393176 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.393479 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.393894 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.393822 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394063 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394263 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394289 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394360 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394391 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394480 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394502 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394909 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404575 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.394918 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.395174 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.395303 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.395443 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396009 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396088 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396157 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.396668 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.397301 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.397553 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.397578 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.397604 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.397919 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.397989 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398025 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398246 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.398998 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.399545 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.399727 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.399823 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.399883 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.400004 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.400227 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.400255 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.400348 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.400887 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401735 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.401791 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.402552 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403195 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.403890 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.404198 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:00:49.904130437 +0000 UTC m=+20.486821891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404925 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404986 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405022 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405059 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405073 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405090 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405181 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405199 4872 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405360 4872 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405378 4872 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405402 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405431 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.405450 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.406125 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.406431 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.407109 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.407187 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.407343 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.407620 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.407646 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.408154 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.408201 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.408481 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.408562 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.408841 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.408908 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.409128 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.409323 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.409335 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.409614 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.409668 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.409847 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.410112 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.410245 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.410296 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.412029 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.410552 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404475 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.404767 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.410593 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.410428 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.411140 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.411653 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.412288 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.411852 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.411988 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.412320 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.412237 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.412397 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.412637 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.412658 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.411676 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.413632 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.413807 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.413801 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.413889 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:49.913865961 +0000 UTC m=+20.496557615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.413903 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.404207 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.413985 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:49.913975054 +0000 UTC m=+20.496666728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.416534 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.417515 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.417884 4872 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.418476 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.419022 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.419136 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.419207 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.419267 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.419882 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.420980 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.421863 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.421892 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.421926 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.422008 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.422376 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.422720 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.422834 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.422865 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.422874 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.423037 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.423649 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.424270 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.424523 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.425450 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.426395 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.426529 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.427227 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.427262 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.427453 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.428274 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.430828 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.431683 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.433415 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.433982 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.435894 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.436320 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.437213 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.437914 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.437958 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.437480 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.438582 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.439709 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.440264 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.447139 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.447734 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.448853 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.451192 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.451770 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.452846 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.452977 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.453050 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.453114 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.453143 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.453164 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.453478 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.453929 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.453948 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.453970 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.453995 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.454077 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:49.954053236 +0000 UTC m=+20.536744670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.453932 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.454113 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.454135 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.454149 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.454206 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:49.95418677 +0000 UTC m=+20.536878194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.454368 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.454438 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.454799 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.456543 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.458756 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.458754 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.459012 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.459046 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.459164 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.459885 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.462210 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.462618 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.462619 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.463339 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.464364 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.464624 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.464966 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.465854 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.465966 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.466087 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.466942 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.466969 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.468234 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.482823 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.486745 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.489201 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.490503 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506710 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506781 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506867 4872 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506880 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506887 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506891 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506919 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506930 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506942 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506951 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506961 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.506989 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507001 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507011 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507025 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507036 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507046 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507074 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507083 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507092 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507102 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507111 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507121 4872 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507145 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507154 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507164 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507173 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507182 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507191 4872 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507201 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507225 4872 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507235 4872 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507246 4872 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507236 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507256 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507345 4872 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507364 4872 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc 
kubenswrapper[4872]: I0203 06:00:49.507381 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507397 4872 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507419 4872 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507437 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507455 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507474 4872 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507491 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507508 4872 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507523 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507539 4872 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507559 4872 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507578 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507594 4872 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507611 4872 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507627 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507644 4872 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507680 4872 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507732 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507748 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507766 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507784 4872 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507800 4872 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507816 4872 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507833 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507850 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507867 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507883 4872 reconciler_common.go:293] "Volume detached for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507899 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507917 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507936 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507952 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507968 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.507985 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508000 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508045 4872 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508061 4872 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508078 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508093 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508109 4872 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508125 4872 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508140 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508156 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508172 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508187 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508203 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508219 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508235 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508253 4872 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508269 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508284 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508300 4872 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508317 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508333 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508349 4872 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508368 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508384 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508400 4872 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508417 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508434 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508449 4872 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508465 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508480 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508496 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508512 4872 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508528 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508543 4872 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508559 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508574 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508590 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508606 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508626 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508644 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508660 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508675 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508728 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508747 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508764 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508784 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508801 4872 reconciler_common.go:293] 
"Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508820 4872 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508839 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508854 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508870 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508885 4872 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508900 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508915 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508936 4872 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508953 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508971 4872 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.508988 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509003 4872 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509017 4872 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509033 4872 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509048 4872 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509067 4872 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509082 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509097 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509112 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509128 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509144 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509161 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509176 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509193 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509210 4872 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509226 4872 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509242 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509262 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509278 4872 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509295 4872 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509314 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509332 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509348 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509364 4872 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509380 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509396 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509412 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509428 4872 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 
06:00:49.509444 4872 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509460 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509476 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509492 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509508 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509525 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509543 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509560 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509576 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509594 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509611 4872 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509626 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509642 4872 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:00:49 crc 
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509676 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509716 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509732 4872 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509747 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509762 4872 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509778 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509793 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509807 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509822 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509837 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509851 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509866 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509881 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509895 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509910 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509924 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509939 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509954 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509971 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.509986 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.510004 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.510019 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.510034 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.668767 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.683424 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.696948 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 06:00:49 crc kubenswrapper[4872]: I0203 06:00:49.913148 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:49 crc kubenswrapper[4872]: E0203 06:00:49.913364 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:00:50.913329996 +0000 UTC m=+21.496021410 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.013786 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.013857 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.013891 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.013923 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014060 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014133 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:51.014113167 +0000 UTC m=+21.596804601 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014252 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014303 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014320 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014409 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:51.014381183 +0000 UTC m=+21.597072597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014467 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014496 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:51.014488706 +0000 UTC m=+21.597180120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.014809 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.015138 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.015167 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.015222 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:51.015205162 +0000 UTC m=+21.597896816 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.048451 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:12:41.725968083 +0000 UTC Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.128152 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.129384 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.132317 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.133889 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.136154 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.137398 4872 
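Note: the nestedpendingoperations.go:348 errors above show how the kubelet serializes and rate-limits volume operations: each failed MountVolume.SetUp or UnmountVolume.TearDown is parked with a "No retries permitted until ..." deadline (durationBeforeRetry), and the delay grows on repeated failures of the same volume, so transient conditions like a secret or configMap that is briefly "not registered" in the kubelet's object cache resolve themselves on a later pass. A stdlib-only sketch of that per-item backoff idea; the initial delay, cap and attempt count are illustrative values, not the kubelet's actual constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryWithBackoff retries op, doubling the wait after each failure up to
    // max, loosely mirroring the "durationBeforeRetry" behaviour in the log.
    func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
    	delay := initial
    	for i := 0; i < attempts; i++ {
    		if err := op(); err == nil {
    			return nil
    		} else {
    			fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", i+1, err, delay)
    		}
    		time.Sleep(delay)
    		if delay *= 2; delay > max {
    			delay = max
    		}
    	}
    	return errors.New("out of attempts")
    }

    func main() {
    	n := 0
    	_ = retryWithBackoff(func() error {
    		// Stands in for MountVolume.SetUp: fails until the referenced
    		// objects (e.g. kube-root-ca.crt) appear in the local cache.
    		if n++; n < 3 {
    			return errors.New("object not registered")
    		}
    		return nil
    	}, time.Second, 32*time.Second, 5)
    }

The CSI teardown failure is the same pattern with a different root cause: the kubevirt.io.hostpath-provisioner driver has not yet re-registered with the kubelet after the restart, so TearDownAt is retried until the driver's registration socket reappears.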
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.138673 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.141406 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.143204 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.144982 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.146221 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.150497 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.151988 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.153639 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.161140 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.162649 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.165519 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.166565 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.167254 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.168191 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.171003 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.172286 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.174888 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.176094 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 
06:00:50.178940 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.180097 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.181771 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.184452 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.185152 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.186518 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.187167 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.187737 4872 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.187929 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.189264 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.190949 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.191433 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.193038 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.194279 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 
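Note: the kubelet_volumes.go:163 records above are the kubelet's orphan-cleanup pass: for each pod UID that no longer exists, the now-empty /var/lib/kubelet/pods/<podUID>/volumes directory is removed (the one kubelet_volumes.go:152 entry also removes a leftover volume subpath for the ovnkube pod). A read-only sketch that lists what such a pass would look at, assuming only the standard kubelet pod directory layout visible in these paths; unlike the kubelet it does not consult the set of currently active pods, so it only reports empty candidates rather than proving they are orphans:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Layout from the log: /var/lib/kubelet/pods/<podUID>/volumes/...
    	root := "/var/lib/kubelet/pods"
    	pods, err := os.ReadDir(root)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	for _, p := range pods {
    		if !p.IsDir() {
    			continue
    		}
    		volDir := filepath.Join(root, p.Name(), "volumes")
    		// An empty volumes dir is a cleanup candidate; the kubelet
    		// additionally checks the pod UID against its active pods
    		// before logging "Cleaned up orphaned pod volumes dir".
    		if entries, err := os.ReadDir(volDir); err == nil && len(entries) == 0 {
    			fmt.Println("empty volumes dir:", volDir)
    		}
    	}
    }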
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.197729 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.198335 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.199276 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.200293 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.201044 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.202073 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.202767 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.203784 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.204369 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.209045 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.209989 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.210936 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.211534 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.212531 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.213199 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.213853 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.214970 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.229412 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
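Note: each "Failed to update status for pod" record embeds the strategic-merge patch the kubelet's status manager tried to send: the $setElementOrder/conditions key pins the full ordering of the conditions list while only the changed entries are carried in conditions itself. A rough sketch of building a patch body with that shape (field values abridged from the log; this shows the wire format visible above, not kubelet source, and would be sent as an HTTP PATCH with content type application/strategic-merge-patch+json):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Shape of the status patch seen in the log: full condition order
    	// plus only the conditions that actually changed.
    	patch := map[string]any{
    		"metadata": map[string]any{"uid": "d75a4c96-2883-4a0b-bab2-0fab2b6c0b49"},
    		"status": map[string]any{
    			"$setElementOrder/conditions": []map[string]string{
    				{"type": "PodReadyToStartContainers"}, {"type": "Initialized"},
    				{"type": "Ready"}, {"type": "ContainersReady"}, {"type": "PodScheduled"},
    			},
    			"conditions": []map[string]string{
    				{"type": "PodReadyToStartContainers", "status": "False"},
    			},
    		},
    	}
    	b, _ := json.Marshal(patch)
    	fmt.Println(string(b))
    }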
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.256599 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.257505 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef"} Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.257540 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"42e2164ae8ed988ff7c4f9473ee3daa97ca1452e2b0cc53cd937499f7ab128d7"} Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.258910 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0a434ec5f025d629730dea0fa6506f5daf0dcdd625547b69232a88fcf329cec5"} Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.260911 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3"} Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.260934 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100"} Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.260946 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"02c228f7cee1c4b33c2701d8a3ab433f84128e9242b1467dee81dcdebd71a5f4"} Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.262529 4872 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.264285 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077" exitCode=255 Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.264324 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077"} Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.273948 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.281852 4872 scope.go:117] "RemoveContainer" containerID="4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.285228 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.292803 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.315988 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03
T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.336310 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.350656 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b1
54edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.364724 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.376962 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.392513 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.402480 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 06:00:50 crc kubenswrapper[4872]: I0203 06:00:50.920521 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:50 crc kubenswrapper[4872]: E0203 06:00:50.920717 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:00:52.920670458 +0000 UTC m=+23.503361872 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.021832 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.021871 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.021891 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.021914 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022020 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022035 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022051 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022097 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022093 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022215 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:53.022187617 +0000 UTC m=+23.604879061 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022330 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022111 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022497 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:53.022459073 +0000 UTC m=+23.605150517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022581 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:53.022534085 +0000 UTC m=+23.605225499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022070 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.022631 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:53.022606336 +0000 UTC m=+23.605297750 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.049103 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:34:46.088327238 +0000 UTC Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.122880 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.122977 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.122897 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.123153 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.123367 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:00:51 crc kubenswrapper[4872]: E0203 06:00:51.123439 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.268861 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.271182 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9"} Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.271608 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.290486 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.321788 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.344615 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.362937 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.377643 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.391651 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:51 crc kubenswrapper[4872]: I0203 06:00:51.406185 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:52 crc kubenswrapper[4872]: I0203 06:00:52.050204 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:16:01.702343634 +0000 UTC Feb 03 06:00:52 crc kubenswrapper[4872]: I0203 06:00:52.939547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:52 crc kubenswrapper[4872]: E0203 06:00:52.939837 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:00:56.939800559 +0000 UTC m=+27.522492003 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.040631 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.040729 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.040772 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.040815 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.040922 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.040977 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:53 
crc kubenswrapper[4872]: E0203 06:00:53.040998 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041021 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041043 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041101 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:57.041075591 +0000 UTC m=+27.623767035 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041146 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041174 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:57.041157794 +0000 UTC m=+27.623849238 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041002 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041198 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041236 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-03 06:00:57.041186335 +0000 UTC m=+27.623877789 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.041264 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:00:57.041251136 +0000 UTC m=+27.623942590 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.051062 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:22:24.724550288 +0000 UTC Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.121943 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.122030 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.122071 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.122122 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.122255 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:00:53 crc kubenswrapper[4872]: E0203 06:00:53.122378 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.279636 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8"} Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.303210 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:53Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.326245 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:53Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.348835 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:53Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.374475 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:53Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.393972 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:53Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.411053 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:53Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:53 crc kubenswrapper[4872]: I0203 06:00:53.429195 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:53Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:54 crc kubenswrapper[4872]: I0203 06:00:54.052009 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:56:24.044279755 +0000 UTC Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.052770 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:33:39.491367747 
+0000 UTC Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.065941 4872 csr.go:261] certificate signing request csr-wjnxq is approved, waiting to be issued Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.081042 4872 csr.go:257] certificate signing request csr-wjnxq is issued Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.122457 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.122515 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.123032 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.123127 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.123378 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.123445 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.576540 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-898rp"] Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.576956 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.581541 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.581750 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.581867 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.598368 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.616421 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.632348 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.646888 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.666299 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrkp\" (UniqueName: \"kubernetes.io/projected/b40f2e41-c3e3-4cfe-bf45-9d90649366d6-kube-api-access-khrkp\") pod \"node-resolver-898rp\" (UID: \"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\") " pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.666336 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b40f2e41-c3e3-4cfe-bf45-9d90649366d6-hosts-file\") pod \"node-resolver-898rp\" (UID: \"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\") " pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.674574 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.684478 4872 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.686175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.686230 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.686240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.686305 4872 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.691273 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.692873 4872 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.693091 4872 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.694158 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.694183 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.694192 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.694205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.694217 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:55Z","lastTransitionTime":"2026-02-03T06:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.702672 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.720255 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.723507 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.727866 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.727906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.727921 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.727943 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.727958 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:55Z","lastTransitionTime":"2026-02-03T06:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.744478 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.749608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.749657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.749672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.749694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.749727 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:55Z","lastTransitionTime":"2026-02-03T06:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.767474 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khrkp\" (UniqueName: \"kubernetes.io/projected/b40f2e41-c3e3-4cfe-bf45-9d90649366d6-kube-api-access-khrkp\") pod \"node-resolver-898rp\" (UID: \"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\") " pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.767549 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b40f2e41-c3e3-4cfe-bf45-9d90649366d6-hosts-file\") pod \"node-resolver-898rp\" (UID: \"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\") " pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.767666 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b40f2e41-c3e3-4cfe-bf45-9d90649366d6-hosts-file\") pod \"node-resolver-898rp\" (UID: \"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\") " pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.785137 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.795759 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.795800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.795815 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.795835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.795849 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:55Z","lastTransitionTime":"2026-02-03T06:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.798048 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrkp\" (UniqueName: \"kubernetes.io/projected/b40f2e41-c3e3-4cfe-bf45-9d90649366d6-kube-api-access-khrkp\") pod \"node-resolver-898rp\" (UID: \"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\") " pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.812086 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.818205 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.822369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: 
I0203 06:00:55.822394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.822404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.822436 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.822448 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:55Z","lastTransitionTime":"2026-02-03T06:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.825105 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.842121 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.847369 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: E0203 06:00:55.847529 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.849499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.849542 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.849553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.849568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.849577 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:55Z","lastTransitionTime":"2026-02-03T06:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.889153 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-898rp" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.901040 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.920666 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.933816 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.953321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.954214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.954734 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.955106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.955556 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:55Z","lastTransitionTime":"2026-02-03T06:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.961494 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:55Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:55 crc kubenswrapper[4872]: I0203 06:00:55.998563 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hgxbn"] Feb 03 06:00:55 crc 
kubenswrapper[4872]: I0203 06:00:55.998927 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9pfgq"] Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:55.999422 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-g2f65"] Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:55.999645 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.000010 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.000360 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.002920 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.024485 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.024559 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.024878 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.024949 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.024955 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.025065 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.025629 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.025913 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.026135 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.026180 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.026896 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.027562 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.052969 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:03:35.146924564 +0000 UTC Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.058138 4872 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.058169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.058180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.058195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.058206 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.069846 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/05d05db4-da7f-4f2f-9025-672aefab2d16-rootfs\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.069898 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-hostroot\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.069914 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05d05db4-da7f-4f2f-9025-672aefab2d16-proxy-tls\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.070191 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-cnibin\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.070215 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-conf-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.070231 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05d05db4-da7f-4f2f-9025-672aefab2d16-mcd-auth-proxy-config\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: 
I0203 06:00:56.070642 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.070680 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-socket-dir-parent\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.070847 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-cni-multus\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071060 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db59aed5-04bc-4793-8938-196aace29feb-multus-daemon-config\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071093 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071108 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db59aed5-04bc-4793-8938-196aace29feb-cni-binary-copy\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071245 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-kubelet\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071262 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w5qr\" (UniqueName: \"kubernetes.io/projected/05d05db4-da7f-4f2f-9025-672aefab2d16-kube-api-access-7w5qr\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071421 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cnibin\") pod 
\"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071444 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-os-release\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071473 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-k8s-cni-cncf-io\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071523 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cni-binary-copy\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071599 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-os-release\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071618 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nd6x\" (UniqueName: \"kubernetes.io/projected/db59aed5-04bc-4793-8938-196aace29feb-kube-api-access-2nd6x\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071710 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-cni-bin\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071728 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-system-cni-dir\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.071749 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w97s\" (UniqueName: \"kubernetes.io/projected/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-kube-api-access-6w97s\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.072462 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-system-cni-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.072495 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-cni-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.072511 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-netns\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.072527 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-multus-certs\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.072549 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-etc-kubernetes\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.073541 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.082815 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-03 05:55:55 +0000 UTC, rotation deadline is 2026-12-16 20:54:53.376209058 +0000 UTC Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.082901 4872 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7598h53m57.293311285s for next certificate rotation Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.096227 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.110990 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.125920 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.137030 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.150029 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.160056 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.160090 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.160103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.160121 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.160136 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.162029 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: 
I0203 06:00:56.173381 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-os-release\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.173523 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-k8s-cni-cncf-io\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.173610 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-k8s-cni-cncf-io\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.173724 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w5qr\" (UniqueName: \"kubernetes.io/projected/05d05db4-da7f-4f2f-9025-672aefab2d16-kube-api-access-7w5qr\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.173814 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cnibin\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.173893 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-os-release\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.173975 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-os-release\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.173923 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cnibin\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174092 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nd6x\" (UniqueName: \"kubernetes.io/projected/db59aed5-04bc-4793-8938-196aace29feb-kube-api-access-2nd6x\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174174 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cni-binary-copy\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174255 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-cni-bin\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174334 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-system-cni-dir\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174417 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w97s\" (UniqueName: \"kubernetes.io/projected/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-kube-api-access-6w97s\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174488 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-system-cni-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174580 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-cni-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174694 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-netns\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174784 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-multus-certs\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174880 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cni-binary-copy\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174892 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-etc-kubernetes\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.174983 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/05d05db4-da7f-4f2f-9025-672aefab2d16-rootfs\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175006 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-hostroot\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175034 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-cnibin\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175062 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05d05db4-da7f-4f2f-9025-672aefab2d16-proxy-tls\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175094 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-conf-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175126 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05d05db4-da7f-4f2f-9025-672aefab2d16-mcd-auth-proxy-config\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175169 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175199 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-cni-multus\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175227 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db59aed5-04bc-4793-8938-196aace29feb-multus-daemon-config\") pod 
\"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175267 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-socket-dir-parent\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175309 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175348 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db59aed5-04bc-4793-8938-196aace29feb-cni-binary-copy\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175373 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-kubelet\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175444 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-kubelet\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175480 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/05d05db4-da7f-4f2f-9025-672aefab2d16-rootfs\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175505 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-hostroot\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175536 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-cnibin\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175632 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-etc-kubernetes\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175776 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-cni-bin\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175877 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-system-cni-dir\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.175954 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-system-cni-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176030 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-var-lib-cni-multus\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176031 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-netns\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176071 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-conf-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176072 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-socket-dir-parent\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176110 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-host-run-multus-certs\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176374 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db59aed5-04bc-4793-8938-196aace29feb-multus-cni-dir\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176697 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/db59aed5-04bc-4793-8938-196aace29feb-multus-daemon-config\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " 
pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176725 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176785 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/05d05db4-da7f-4f2f-9025-672aefab2d16-mcd-auth-proxy-config\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176798 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db59aed5-04bc-4793-8938-196aace29feb-cni-binary-copy\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.176858 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.177048 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.177216 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-os-release\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.180629 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/05d05db4-da7f-4f2f-9025-672aefab2d16-proxy-tls\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.189759 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.192241 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nd6x\" (UniqueName: \"kubernetes.io/projected/db59aed5-04bc-4793-8938-196aace29feb-kube-api-access-2nd6x\") pod \"multus-g2f65\" (UID: \"db59aed5-04bc-4793-8938-196aace29feb\") " pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.194149 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6w97s\" (UniqueName: \"kubernetes.io/projected/4e75c1ed-cb56-4fe0-a8db-fd1400cb935f-kube-api-access-6w97s\") pod \"multus-additional-cni-plugins-9pfgq\" (UID: \"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\") " pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.200285 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w5qr\" (UniqueName: \"kubernetes.io/projected/05d05db4-da7f-4f2f-9025-672aefab2d16-kube-api-access-7w5qr\") pod \"machine-config-daemon-hgxbn\" (UID: \"05d05db4-da7f-4f2f-9025-672aefab2d16\") " pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.202567 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.217357 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.231162 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.250223 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.262052 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.262089 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.262098 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.262115 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.262126 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.275838 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.289971 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-898rp" event={"ID":"b40f2e41-c3e3-4cfe-bf45-9d90649366d6","Type":"ContainerStarted","Data":"d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.290050 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-898rp" event={"ID":"b40f2e41-c3e3-4cfe-bf45-9d90649366d6","Type":"ContainerStarted","Data":"3304b3f2e7c0c40f6ed478cb6dc5b152ee9a12787af8b6ac52d8c454ddc348dc"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.299047 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.313277 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.327068 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.336619 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-g2f65" Feb 03 06:00:56 crc kubenswrapper[4872]: W0203 06:00:56.347564 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb59aed5_04bc_4793_8938_196aace29feb.slice/crio-eea8d392cbf20f3cd7528d6af73f4d48847d41477aac9a91f82bf15d8d8e31bf WatchSource:0}: Error finding container eea8d392cbf20f3cd7528d6af73f4d48847d41477aac9a91f82bf15d8d8e31bf: Status 404 returned error can't find the container with id eea8d392cbf20f3cd7528d6af73f4d48847d41477aac9a91f82bf15d8d8e31bf Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.349774 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.351873 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.358617 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.363807 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.363844 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.363855 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.363872 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.363888 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.375702 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.391472 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.404939 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbpgt"] Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.406020 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.408151 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.409801 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 06:00:56 crc 
kubenswrapper[4872]: I0203 06:00:56.409836 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.409939 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.409994 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.410032 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.410131 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.411941 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.421193 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.439730 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524470 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-kubelet\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524544 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-systemd-units\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 
06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524593 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-systemd\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524610 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-config\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524629 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524664 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-log-socket\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524721 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-slash\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524739 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-ovn\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524812 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovn-node-metrics-cert\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524842 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-script-lib\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524859 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs9hh\" (UniqueName: \"kubernetes.io/projected/dafd73bb-7642-409c-9ea2-f6dbc002067f-kube-api-access-bs9hh\") pod \"ovnkube-node-jbpgt\" (UID: 
\"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524901 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-node-log\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524915 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-bin\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.524932 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-env-overrides\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.525002 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-netns\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.525042 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-var-lib-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.525062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-netd\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.525091 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-ovn-kubernetes\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.525131 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.525233 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-etc-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.530568 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.536013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.536046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.536055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.536072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.536080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.547892 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.561007 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.573971 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.589790 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.602009 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.614349 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625632 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-etc-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625673 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-kubelet\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625696 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-config\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625728 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-systemd-units\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625744 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-systemd\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625759 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-log-socket\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625761 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-etc-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625777 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625795 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-ovn\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625799 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-kubelet\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625814 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-systemd\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625809 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovn-node-metrics-cert\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625843 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-systemd-units\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625869 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-slash\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625884 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625895 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-script-lib\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625910 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-log-socket\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625913 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs9hh\" (UniqueName: \"kubernetes.io/projected/dafd73bb-7642-409c-9ea2-f6dbc002067f-kube-api-access-bs9hh\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625971 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-bin\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.625988 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-env-overrides\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626007 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-node-log\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626044 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-netns\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626066 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-var-lib-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626085 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-netd\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 
06:00:56.626102 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626125 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-ovn-kubernetes\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626139 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-slash\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626177 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-netns\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626208 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-bin\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626389 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-config\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626432 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-ovn\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626458 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-node-log\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626492 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-netd\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626513 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-var-lib-openvswitch\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626536 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626559 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-ovn-kubernetes\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626593 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-env-overrides\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.626691 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-script-lib\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.630162 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovn-node-metrics-cert\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.634363 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.638523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.638548 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.638560 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.638575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.638585 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.645830 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs9hh\" (UniqueName: \"kubernetes.io/projected/dafd73bb-7642-409c-9ea2-f6dbc002067f-kube-api-access-bs9hh\") pod \"ovnkube-node-jbpgt\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.654684 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.664902 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.675465 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.694024 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.704803 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.714073 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.723011 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.731066 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.744813 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apise
rver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.748540 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.748575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.748585 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.748599 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.748617 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.755081 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:56Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.852853 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.853145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.853155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.853170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.853181 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.956134 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.956173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.956186 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.956202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:56 crc kubenswrapper[4872]: I0203 06:00:56.956215 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:56Z","lastTransitionTime":"2026-02-03T06:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.029022 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.029253 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:01:05.02922493 +0000 UTC m=+35.611916344 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.053116 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:28:27.898291355 +0000 UTC Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.059245 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.059293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.059305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.059325 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.059338 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.122656 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.122800 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.122862 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.123023 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.123181 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.123288 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.130359 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.130400 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.130424 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.130448 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130524 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130580 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:05.130560014 +0000 UTC m=+35.713251428 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130611 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130625 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130746 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130786 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130807 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130817 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:05.130791359 +0000 UTC m=+35.713482783 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130851 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130865 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130870 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:05.13085031 +0000 UTC m=+35.713541764 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:57 crc kubenswrapper[4872]: E0203 06:00:57.130896 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:05.130887661 +0000 UTC m=+35.713579085 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.161640 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.161740 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.161755 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.161779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.161794 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.264328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.264384 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.264400 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.264425 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.264441 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.295073 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.295123 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.295137 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"f9919b93d10e384a219e664228ec71fca7a00a29236d7eccaebd218853384b6d"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.297363 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda" exitCode=0 Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.297436 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.297488 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"29fa1815ceb949fd26f698e03328b8ccc01cecd9374a91eaf6daecb13e66f52e"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.299529 4872 generic.go:334] "Generic (PLEG): container finished" podID="4e75c1ed-cb56-4fe0-a8db-fd1400cb935f" containerID="57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2" exitCode=0 Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.299595 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerDied","Data":"57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.299630 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerStarted","Data":"f932646bb219f36751a3f2c7b2a71501875bc84cd86b24d52ea7ccf9a2173284"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.301873 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerStarted","Data":"d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.301898 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerStarted","Data":"eea8d392cbf20f3cd7528d6af73f4d48847d41477aac9a91f82bf15d8d8e31bf"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.324728 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.343091 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.368036 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.368080 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.368094 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.368114 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.368127 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.370780 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.390188 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.406648 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.422686 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.438558 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.460780 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.472730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.473719 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.473738 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.473761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.474017 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.483807 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.498326 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.510063 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.526385 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.539311 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.561891 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.576330 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.576958 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.577008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.577019 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.577045 4872 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.577059 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.587779 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.597959 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.612197 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.623943 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.635967 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.646293 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.656391 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.672111 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.679153 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.679179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.679187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.679199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.679208 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.740383 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.757232 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.779187 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc 
kubenswrapper[4872]: I0203 06:00:57.782014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.782132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.782211 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.782300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.782377 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.885459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.885648 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.885785 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.885914 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.886028 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.925656 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pwxt9"] Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.926057 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.928189 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.929431 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.930080 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.930595 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.937360 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e5db98db-c011-40be-b541-4a6552618133-serviceca\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.937575 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkh9k\" (UniqueName: \"kubernetes.io/projected/e5db98db-c011-40be-b541-4a6552618133-kube-api-access-tkh9k\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.937899 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5db98db-c011-40be-b541-4a6552618133-host\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.946428 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.960029 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.973514 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.989248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.989285 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.989301 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.989323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.989341 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:57Z","lastTransitionTime":"2026-02-03T06:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:57 crc kubenswrapper[4872]: I0203 06:00:57.990263 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:57Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.011257 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.025539 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.038882 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e5db98db-c011-40be-b541-4a6552618133-serviceca\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.038934 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkh9k\" (UniqueName: \"kubernetes.io/projected/e5db98db-c011-40be-b541-4a6552618133-kube-api-access-tkh9k\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.038988 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e5db98db-c011-40be-b541-4a6552618133-host\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.039056 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e5db98db-c011-40be-b541-4a6552618133-host\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.040389 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e5db98db-c011-40be-b541-4a6552618133-serviceca\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.046523 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.053821 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:22:12.113283553 +0000 UTC Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.065036 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc 
kubenswrapper[4872]: I0203 06:00:58.068410 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkh9k\" (UniqueName: \"kubernetes.io/projected/e5db98db-c011-40be-b541-4a6552618133-kube-api-access-tkh9k\") pod \"node-ca-pwxt9\" (UID: \"e5db98db-c011-40be-b541-4a6552618133\") " pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.093115 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z 
is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.096011 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.096045 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.096053 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.096069 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.096080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.107553 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.128666 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.159563 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.178845 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.191971 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.198726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.198749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.198757 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.198770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.198779 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.300499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.300535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.300547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.300569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.300582 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.310562 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pwxt9" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.327981 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.328028 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.328041 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.328054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.328066 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.328078 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.329591 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" 
event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerStarted","Data":"15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.342632 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.366234 4872 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.381671 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.395193 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.405617 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.405655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.405669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.405691 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.405727 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.408561 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.423111 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.445972 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.468342 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z 
is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.482905 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.496719 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.508218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.508250 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.508263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.508280 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.508294 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.515438 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.541806 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.559065 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.569573 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.611071 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.611121 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.611132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.611153 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.611654 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.715293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.715339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.715355 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.715375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.715389 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.819096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.819146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.819158 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.819176 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.819188 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.922320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.922353 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.922362 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.922377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:58 crc kubenswrapper[4872]: I0203 06:00:58.922387 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:58Z","lastTransitionTime":"2026-02-03T06:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.025901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.025962 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.025979 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.026004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.026051 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.054736 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:48:19.779396374 +0000 UTC Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.122205 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.122246 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.122228 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:00:59 crc kubenswrapper[4872]: E0203 06:00:59.122448 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:00:59 crc kubenswrapper[4872]: E0203 06:00:59.122570 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:00:59 crc kubenswrapper[4872]: E0203 06:00:59.122767 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.129532 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.129589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.129602 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.129625 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.129643 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.232202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.232269 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.232286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.232315 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.232333 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.334949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.335003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.335021 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.335045 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.335064 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.337756 4872 generic.go:334] "Generic (PLEG): container finished" podID="4e75c1ed-cb56-4fe0-a8db-fd1400cb935f" containerID="15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47" exitCode=0 Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.337859 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerDied","Data":"15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.340985 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pwxt9" event={"ID":"e5db98db-c011-40be-b541-4a6552618133","Type":"ContainerStarted","Data":"8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.341041 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pwxt9" event={"ID":"e5db98db-c011-40be-b541-4a6552618133","Type":"ContainerStarted","Data":"1973382813db22c8027eb001b78086f53117705bd7c2321b817f1ae84a323945"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.359670 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.384492 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.408743 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.428399 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.437640 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.437905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.438023 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.438115 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.438209 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.451493 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.466846 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.483993 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.501581 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.527312 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z 
is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.542489 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.542552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.542564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.542582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.542597 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.547660 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.562243 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.576611 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.598796 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.612616 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.645461 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.645526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.645537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.645560 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.645572 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.650427 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.679990 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 
06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.713186 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.731236 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.743299 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.748551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.748583 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.748594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.748612 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.748622 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.756266 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.771330 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.782806 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.802564 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z 
is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.819596 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.841095 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.851390 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.851450 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.851468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.851525 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.851544 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.857230 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.877473 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:00:59Z is after 2025-08-24T17:21:41Z" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.886039 4872 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.888042 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-898rp/status\": read tcp 38.102.83.246:51884->38.102.83.246:6443: use of closed network connection" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.954610 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.954673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.954708 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.954731 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:00:59 crc kubenswrapper[4872]: I0203 06:00:59.954748 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:00:59Z","lastTransitionTime":"2026-02-03T06:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.055167 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 13:29:28.884957626 +0000 UTC Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.056993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.057065 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.057084 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.057112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.057131 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.148617 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.159479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.159535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.159553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.159577 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.159595 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.166307 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.179158 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.199576 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.226602 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.247229 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.262361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.262412 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.262432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.262460 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.262480 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.267523 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.293204 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.320396 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.351623 4872 generic.go:334] "Generic (PLEG): container finished" podID="4e75c1ed-cb56-4fe0-a8db-fd1400cb935f" containerID="f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a" exitCode=0 Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.351958 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerDied","Data":"f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.368177 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.376012 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.376255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.376981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.377103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.377203 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.384649 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.419494 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.439729 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.476570 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z 
is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.479553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.479635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.479673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.479716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.479774 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.492579 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.507675 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.522500 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.543888 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.561080 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.583039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.583091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.583104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.583125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.583140 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.589411 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590f
f22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.606573 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.620198 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.629487 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.649199 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.686789 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.686823 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.686834 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.686852 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.686868 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.691675 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.731741 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.771349 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.789245 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.789295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.789316 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.789342 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.789363 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.821078 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.892048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.892083 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.892091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.892106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.892117 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.995032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.995086 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.995105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.995128 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:00 crc kubenswrapper[4872]: I0203 06:01:00.995147 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:00Z","lastTransitionTime":"2026-02-03T06:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.056359 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:00:05.049800519 +0000 UTC Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.098539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.098608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.098626 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.098653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.098675 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.122227 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.122297 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.122295 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:01 crc kubenswrapper[4872]: E0203 06:01:01.122416 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:01 crc kubenswrapper[4872]: E0203 06:01:01.122519 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:01 crc kubenswrapper[4872]: E0203 06:01:01.122619 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.202156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.202222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.202245 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.202276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.202294 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.305618 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.305671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.305701 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.305717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.305728 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.364612 4872 generic.go:334] "Generic (PLEG): container finished" podID="4e75c1ed-cb56-4fe0-a8db-fd1400cb935f" containerID="0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e" exitCode=0 Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.364714 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerDied","Data":"0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.376768 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.385416 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.405701 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.410819 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.410872 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.410886 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.410905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.410919 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.423407 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:
01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.438001 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.453936 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.468442 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.482301 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.498965 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.513528 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.513567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.513583 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.513603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.513617 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.520589 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z 
is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.559886 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.572219 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.590242 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.606486 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.616163 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.616189 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.616199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.616214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.616225 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.620642 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:01Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.718906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.718967 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.718983 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.719006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.719022 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.821750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.821814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.821832 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.821856 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.821875 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.924926 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.924975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.924990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.925013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:01 crc kubenswrapper[4872]: I0203 06:01:01.925029 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:01Z","lastTransitionTime":"2026-02-03T06:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.027748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.027814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.027826 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.027842 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.027853 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.057277 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:56:26.897993361 +0000 UTC Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.131260 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.131304 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.131316 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.131333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.131348 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.234996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.235057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.235080 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.235106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.235125 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.338324 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.338382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.338395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.338417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.338437 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.384771 4872 generic.go:334] "Generic (PLEG): container finished" podID="4e75c1ed-cb56-4fe0-a8db-fd1400cb935f" containerID="db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088" exitCode=0 Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.384858 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerDied","Data":"db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.409801 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.432593 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.441812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.441849 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.441864 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.441883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.441897 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.453387 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.472893 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.492973 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.515853 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z 
is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.537271 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.546171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.546226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.546240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.546259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.546271 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.559653 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.571460 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.581952 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.594833 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.609177 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.621876 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.635041 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:02Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.648209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.648241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.648250 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.648264 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.648275 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.750449 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.750506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.750516 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.750530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.750540 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.853128 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.853167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.853179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.853194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.853204 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.955830 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.955919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.955937 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.955962 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:02 crc kubenswrapper[4872]: I0203 06:01:02.955979 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:02Z","lastTransitionTime":"2026-02-03T06:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.057450 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:14:41.973501353 +0000 UTC Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.059552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.059573 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.059581 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.059593 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.059601 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.123294 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.123355 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.123399 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:03 crc kubenswrapper[4872]: E0203 06:01:03.124512 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:03 crc kubenswrapper[4872]: E0203 06:01:03.124179 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:03 crc kubenswrapper[4872]: E0203 06:01:03.123542 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.163422 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.163499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.163511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.163559 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.163579 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.266293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.266325 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.266339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.266357 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.266368 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.372219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.372266 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.372282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.372303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.372320 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.395332 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.395763 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.395824 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.405740 4872 generic.go:334] "Generic (PLEG): container finished" podID="4e75c1ed-cb56-4fe0-a8db-fd1400cb935f" containerID="b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6" exitCode=0 Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.405813 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerDied","Data":"b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.422234 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.446129 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.449546 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.459369 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.477016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.477061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.477081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.477106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.477125 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.481923 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.503380 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.527671 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.553535 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cd
bfd400158ece3905f74e4690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.573552 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.580557 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.580665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.580710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.580740 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.580763 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.588018 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.603113 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.622246 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.640352 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.663759 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.683816 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.684891 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.684939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.684952 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.684971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.684985 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.703633 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.728851 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.742428 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.763102 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.779351 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.788079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.788146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.788177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.788204 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.788222 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.801337 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.819125 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.837001 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.857525 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.891198 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cd
bfd400158ece3905f74e4690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.891393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.891574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.891588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.891613 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.891627 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.909986 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.925672 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.938882 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.960128 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.971964 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:03Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.993938 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.994000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.994018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.994046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:03 crc kubenswrapper[4872]: I0203 06:01:03.994066 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:03Z","lastTransitionTime":"2026-02-03T06:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.058243 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:33:42.981544666 +0000 UTC Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.096919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.097005 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.097027 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.097061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.097084 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.200005 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.200076 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.200097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.200126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.200145 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.303339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.303463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.303486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.303515 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.303535 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.406484 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.406650 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.406679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.406772 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.406833 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.417795 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" event={"ID":"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f","Type":"ContainerStarted","Data":"8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.417953 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.442029 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.462372 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.483665 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.510459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.510521 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.510531 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.510549 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.510560 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.514028 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.536058 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.559263 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.581381 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.605863 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.614591 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.614651 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.614668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.614722 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.614742 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.618528 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.632814 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.687062 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cd
bfd400158ece3905f74e4690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.717432 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.717769 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.717787 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.717795 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.717812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.717823 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.736206 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.749812 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:04Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.820442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.820480 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.820491 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.820511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.820523 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.923213 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.923259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.923272 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.923295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:04 crc kubenswrapper[4872]: I0203 06:01:04.923309 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:04Z","lastTransitionTime":"2026-02-03T06:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.024798 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.024837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.024846 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.024859 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.024867 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.059115 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:25:31.129649455 +0000 UTC Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.121803 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.121911 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.122248 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.122293 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.122325 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.122364 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.123663 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.123765 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:01:21.123752327 +0000 UTC m=+51.706443731 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.127185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.127209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.127217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.127231 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.127239 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.225335 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.225376 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.225397 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.225424 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225479 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225537 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225552 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225561 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225571 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225593 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225603 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225631 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225603 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:21.225575172 +0000 UTC m=+51.808266626 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225754 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:21.225706225 +0000 UTC m=+51.808397639 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225780 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:21.225769836 +0000 UTC m=+51.808461250 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:01:05 crc kubenswrapper[4872]: E0203 06:01:05.225798 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:21.225792237 +0000 UTC m=+51.808483651 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.229520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.229540 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.229548 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.229561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.229588 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.332314 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.332411 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.332432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.332455 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.332472 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.421639 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.435134 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.435211 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.435236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.435267 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.435290 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.538754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.538810 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.538824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.538846 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.538862 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.641957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.642013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.642031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.642062 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.642078 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.744069 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.744097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.744107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.744120 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.744129 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.846114 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.846140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.846147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.846160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.846169 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.949463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.949561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.949587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.949620 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:05 crc kubenswrapper[4872]: I0203 06:01:05.949641 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:05Z","lastTransitionTime":"2026-02-03T06:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.053189 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.053249 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.053270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.053300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.053322 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.059742 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:34:20.918239512 +0000 UTC Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.156542 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.156603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.156624 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.156652 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.156674 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.208391 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.208450 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.208469 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.208494 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.208514 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: E0203 06:01:06.229586 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.239770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.239832 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.239861 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.239892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.239926 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: E0203 06:01:06.262773 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.267611 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.267774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
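Every attempt in this burst fails the same way: the API server cannot complete the TLS handshake with the node.network-node-identity.openshift.io webhook, whose serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-02-03. A quick way one might confirm the expiry from the node itself, assuming the webhook is still listening on 127.0.0.1:9743 as the Post URL in the error shows:

    # Grab the webhook's serving certificate and print its validity window.
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates

The notAfter date printed by openssl should match the 2025-08-24T17:21:41Z bound quoted in the kubelet error.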
event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.267794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.267819 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.267838 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: E0203 06:01:06.289233 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.294377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.294432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
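For context: upstream kubelet retries a failed node status update a fixed number of times per sync loop (the nodeStatusUpdateRetry constant, five attempts) before giving up, which is why the fifth consecutive failure below is followed by "Unable to update node status" err="update node status exceeds retry count". One might count the failures in this window directly from the journal, for example:

    journalctl -u kubelet --since "2026-02-03 06:01:06" --until "2026-02-03 06:01:07" | grep -c "Error updating node status"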
event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.294450 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.294472 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.294488 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: E0203 06:01:06.314682 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.319516 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.319601 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
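The recurring Ready=False condition names the other half of the problem: no CNI config has been written to /etc/kubernetes/cni/net.d/, so the runtime reports NetworkReady=false. A sketch of how one might inspect both pieces on the node (the pod-log path is taken from the ovnkube-node entries later in this log, with a glob standing in for the pod UID):

    ls -l /etc/kubernetes/cni/net.d/        # stays empty until ovn-kubernetes writes its config
    tail -n 50 /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_*/ovnkube-controller/0.log

An empty net.d directory together with a crash-looping ovnkube-controller (the ContainerDied/exitCode=1 event below) is consistent with the expired webhook certificate keeping the network stack from coming up.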
event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.319655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.319683 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.319722 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: E0203 06:01:06.339836 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: E0203 06:01:06.340135 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.342191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.342243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.342262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.342288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.342338 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.429200 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/0.log" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.433922 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690" exitCode=1 Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.433981 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.435198 4872 scope.go:117] "RemoveContainer" containerID="c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.450944 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.451009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.451031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.451061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.451086 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.457863 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.480071 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.505125 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.527872 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.554312 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.554659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.554717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc 
kubenswrapper[4872]: I0203 06:01:06.554732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.554754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.554769 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.605966 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.641083 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cd
bfd400158ece3905f74e4690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:05Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 06:01:05.685655 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:05.685759 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:05.685771 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:05.685842 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 06:01:05.685851 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 06:01:05.685874 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:05.685946 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:05.685987 6125 factory.go:656] Stopping watch factory\\\\nI0203 06:01:05.686019 6125 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:05.686068 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:05.686092 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:05.686116 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 06:01:05.686131 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:05.686145 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:05.686171 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 
06:01:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.657675 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.657768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.657786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.657811 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.657828 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.672555 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.695228 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.717003 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.737254 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.764732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.764780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.764797 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.764824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.764844 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.767774 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.787856 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.804588 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:06Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.867505 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.867564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.867583 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.867609 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.867630 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.970609 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.970667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.970683 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.970742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:06 crc kubenswrapper[4872]: I0203 06:01:06.970761 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:06Z","lastTransitionTime":"2026-02-03T06:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.060286 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:41:13.049302643 +0000 UTC Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.073970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.074026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.074043 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.074073 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.074090 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.121884 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.121932 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.121884 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:07 crc kubenswrapper[4872]: E0203 06:01:07.122050 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:07 crc kubenswrapper[4872]: E0203 06:01:07.122170 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:07 crc kubenswrapper[4872]: E0203 06:01:07.122280 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.177153 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.177208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.177227 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.177255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.177291 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.280553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.281337 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.281359 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.281396 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.281418 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.383168 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.383243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.383257 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.383274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.383285 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.439099 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/0.log" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.442172 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.442322 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.462072 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.473715 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.485240 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.485341 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.485360 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.485380 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.485394 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.487727 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.502669 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.520339 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.547545 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run
-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:05Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 06:01:05.685655 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:05.685759 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:05.685771 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:05.685842 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 06:01:05.685851 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 06:01:05.685874 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:05.685946 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:05.685987 6125 factory.go:656] Stopping watch factory\\\\nI0203 06:01:05.686019 6125 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:05.686068 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:05.686092 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:05.686116 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 06:01:05.686131 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:05.686145 6125 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0203 06:01:05.686171 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.563815 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.582584 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.587533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.587606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.587622 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.587646 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.587666 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.599727 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.615431 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.629368 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.651825 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.663485 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.677856 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:07Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.690183 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.690235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.690255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.690282 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.690298 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.793562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.793620 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.793633 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.793656 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.793671 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.896905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.896995 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.897016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.897039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:07 crc kubenswrapper[4872]: I0203 06:01:07.897058 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:07Z","lastTransitionTime":"2026-02-03T06:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.000218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.000272 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.000286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.000302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.000328 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.060734 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:55:25.866263067 +0000 UTC Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.103573 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.103681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.103734 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.103769 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.103790 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.208180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.208436 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.208519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.208598 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.208670 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.312726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.312806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.312832 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.312862 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.312880 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.406335 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.415069 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.415164 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.415222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.415246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.415265 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.430399 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.450254 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/1.log" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.452418 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.452915 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/0.log" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.457334 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1" exitCode=1 Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.457413 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" 
event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.457490 4872 scope.go:117] "RemoveContainer" containerID="c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.459535 4872 scope.go:117] "RemoveContainer" containerID="1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1" Feb 03 06:01:08 crc kubenswrapper[4872]: E0203 06:01:08.465931 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.480132 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.503763 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.518193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.518275 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.518300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.518331 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.518354 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.523077 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.551791 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.572033 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.597288 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.619872 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.622055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.622267 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.622415 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.622552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.622670 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.653744 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b
6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:05Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 06:01:05.685655 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:05.685759 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:05.685771 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:05.685842 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 06:01:05.685851 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 06:01:05.685874 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:05.685946 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:05.685987 6125 factory.go:656] Stopping watch factory\\\\nI0203 06:01:05.686019 6125 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:05.686068 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:05.686092 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:05.686116 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 06:01:05.686131 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:05.686145 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:05.686171 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 
06:01:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.676366 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.697373 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.715812 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.726120 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.726302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.726437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.726573 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.726732 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.734991 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.755402 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.775191 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.798113 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.811408 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.824435 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.829216 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.829247 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.829256 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.829270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.829281 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.847406 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:05Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 06:01:05.685655 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:05.685759 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:05.685771 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:05.685842 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 06:01:05.685851 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 06:01:05.685874 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:05.685946 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:05.685987 6125 factory.go:656] Stopping watch factory\\\\nI0203 06:01:05.686019 6125 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:05.686068 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:05.686092 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:05.686116 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 06:01:05.686131 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:05.686145 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:05.686171 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 
06:01:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.870794 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.885186 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.901927 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.916028 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.932372 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.932424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.932442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.932467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.932484 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:08Z","lastTransitionTime":"2026-02-03T06:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.938093 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.955834 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:08 crc kubenswrapper[4872]: I0203 06:01:08.976464 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.000772 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:08Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.035334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.035372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc 
kubenswrapper[4872]: I0203 06:01:09.035383 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.035400 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.035415 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.061337 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:24:47.174032448 +0000 UTC Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.122086 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:09 crc kubenswrapper[4872]: E0203 06:01:09.122454 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.122162 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:09 crc kubenswrapper[4872]: E0203 06:01:09.122788 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.122086 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:09 crc kubenswrapper[4872]: E0203 06:01:09.123126 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.138133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.138192 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.138214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.138279 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.138299 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.241319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.241405 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.241427 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.241454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.241481 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.345933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.346026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.346054 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.346086 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.346106 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.449720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.449779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.449796 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.449820 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.449838 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.460053 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4"] Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.460939 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.464407 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.464450 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.469384 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/1.log" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.488348 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.508387 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.530942 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.552643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.552723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.552743 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.552767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.552784 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.557872 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.583745 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.587349 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a830583-53f4-48c2-9120-c57c2c4b81e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.587423 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgmc\" (UniqueName: \"kubernetes.io/projected/8a830583-53f4-48c2-9120-c57c2c4b81e3-kube-api-access-fxgmc\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.587521 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a830583-53f4-48c2-9120-c57c2c4b81e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.587629 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a830583-53f4-48c2-9120-c57c2c4b81e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.604154 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.625581 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.643401 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.656329 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.656418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.656445 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.656479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.656505 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.664182 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.686648 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.689341 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a830583-53f4-48c2-9120-c57c2c4b81e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.689416 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a830583-53f4-48c2-9120-c57c2c4b81e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.689551 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a830583-53f4-48c2-9120-c57c2c4b81e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.689609 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgmc\" (UniqueName: \"kubernetes.io/projected/8a830583-53f4-48c2-9120-c57c2c4b81e3-kube-api-access-fxgmc\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.690970 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8a830583-53f4-48c2-9120-c57c2c4b81e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.691673 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8a830583-53f4-48c2-9120-c57c2c4b81e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.698525 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8a830583-53f4-48c2-9120-c57c2c4b81e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.724534 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c06a085b7667022f2dbe81f4fe5cf7083c57f1cdbfd400158ece3905f74e4690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:05Z\\\",\\\"message\\\":\\\"all/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 06:01:05.685655 6125 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:05.685759 6125 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:05.685771 6125 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:05.685842 6125 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 06:01:05.685851 6125 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 06:01:05.685874 6125 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:05.685946 6125 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:05.685987 6125 factory.go:656] Stopping watch factory\\\\nI0203 06:01:05.686019 6125 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:05.686068 6125 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:05.686092 6125 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:05.686116 6125 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 06:01:05.686131 6125 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:05.686145 6125 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:05.686171 6125 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 
for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.729220 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgmc\" (UniqueName: \"kubernetes.io/projected/8a830583-53f4-48c2-9120-c57c2c4b81e3-kube-api-access-fxgmc\") pod \"ovnkube-control-plane-749d76644c-nrgq4\" (UID: \"8a830583-53f4-48c2-9120-c57c2c4b81e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.744082 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.759128 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.759830 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.759901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.759918 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.759940 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.759958 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.774382 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.786745 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.797212 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: W0203 06:01:09.808782 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a830583_53f4_48c2_9120_c57c2c4b81e3.slice/crio-184efefdcb32939357966b442ca74e75afe3008b235c1d24c3ebdd63126cd46f WatchSource:0}: Error finding container 184efefdcb32939357966b442ca74e75afe3008b235c1d24c3ebdd63126cd46f: Status 404 returned error can't find the container with id 184efefdcb32939357966b442ca74e75afe3008b235c1d24c3ebdd63126cd46f Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.835424 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.837072 4872 scope.go:117] "RemoveContainer" containerID="1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1" Feb 03 06:01:09 crc kubenswrapper[4872]: E0203 06:01:09.837310 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.860179 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.862725 4872 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.862966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.863148 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.863340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.863579 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.876232 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.893350 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.910622 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.926955 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.946923 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.958633 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.966060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.966097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.966108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.966126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.966141 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:09Z","lastTransitionTime":"2026-02-03T06:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.973369 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:09 crc kubenswrapper[4872]: I0203 06:01:09.988895 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:09Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.007438 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.019839 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.034116 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.048352 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.061744 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:02:58.169262777 +0000 UTC Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.061797 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.069152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.069174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.069184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.069201 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.069213 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.091292 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b
6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.136735 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.153638 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.168065 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.173593 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.173627 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.173639 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.173654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.173665 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.182566 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.200526 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.230739 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.242912 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-drpfn"] Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.243451 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:10 crc kubenswrapper[4872]: E0203 06:01:10.243582 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.248204 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.260390 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.271662 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.276287 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.276326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.276346 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.276364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.276375 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.287817 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.296490 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.296534 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxdk\" (UniqueName: \"kubernetes.io/projected/a14ad474-acae-486b-bac9-e5e20cc8ec2e-kube-api-access-5gxdk\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.314142 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b
6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.332326 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.347026 4872 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.356666 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.378352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.378575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.378641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.378762 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.378828 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.397605 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.397649 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gxdk\" (UniqueName: \"kubernetes.io/projected/a14ad474-acae-486b-bac9-e5e20cc8ec2e-kube-api-access-5gxdk\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:10 crc kubenswrapper[4872]: E0203 06:01:10.397864 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:10 crc kubenswrapper[4872]: E0203 06:01:10.397968 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:01:10.897945502 +0000 UTC m=+41.480636926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.400150 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.419178 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gxdk\" (UniqueName: \"kubernetes.io/projected/a14ad474-acae-486b-bac9-e5e20cc8ec2e-kube-api-access-5gxdk\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.428971 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.439669 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.454299 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.465907 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.477171 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.480199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.480231 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.480240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.480254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.480263 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.481202 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" event={"ID":"8a830583-53f4-48c2-9120-c57c2c4b81e3","Type":"ContainerStarted","Data":"cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.481249 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" event={"ID":"8a830583-53f4-48c2-9120-c57c2c4b81e3","Type":"ContainerStarted","Data":"b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.481268 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" event={"ID":"8a830583-53f4-48c2-9120-c57c2c4b81e3","Type":"ContainerStarted","Data":"184efefdcb32939357966b442ca74e75afe3008b235c1d24c3ebdd63126cd46f"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.489773 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.501946 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.515190 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.526603 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.552857 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9
c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.564728 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.578579 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.583030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.583137 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.583216 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.583293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.583374 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.592831 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.605324 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.619841 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.639123 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.654482 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.673048 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.686201 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.686404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.686499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.686587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.686668 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.688982 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.704921 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.721732 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.744517 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.762285 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.778349 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.789356 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.789506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.789603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.789712 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.789827 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.799634 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.821970 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.841566 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.868663 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9
c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.892850 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.892906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.892925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.892949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.892965 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.893383 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.906980 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:10 crc kubenswrapper[4872]: E0203 06:01:10.907286 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:10 crc kubenswrapper[4872]: E0203 06:01:10.907395 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:01:11.907365846 +0000 UTC m=+42.490057300 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.915818 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.932926 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.950336 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.996534 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.996579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.996592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.996608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:10 crc kubenswrapper[4872]: I0203 06:01:10.996621 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:10Z","lastTransitionTime":"2026-02-03T06:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.062246 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:33:49.658716173 +0000 UTC
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.100183 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.100242 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.100259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.100284 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.100304 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.122162 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.122217 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 06:01:11 crc kubenswrapper[4872]: E0203 06:01:11.122305 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 06:01:11 crc kubenswrapper[4872]: E0203 06:01:11.122408 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.122650 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 06:01:11 crc kubenswrapper[4872]: E0203 06:01:11.123029 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
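Every status-patch failure in the stretch above shares one root cause: the network-node-identity webhook's serving certificate expired on 2025-08-24, while the node clock reads 2026-02-03, so the TLS handshake fails before any patch is attempted. Below is a minimal, self-contained sketch of the NotBefore/NotAfter validity check behind that x509 error; the certificate path is a placeholder assumption, not something taken from this log.

```go
// certcheck.go - sketch: reproduce the x509 validity test that produced
// "certificate has expired or is not yet valid" in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path (assumption); any PEM-encoded certificate works.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// Mirrors the log: "current time 2026-02-03T06:01:10Z is after 2025-08-24T17:21:41Z"
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```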
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.202928 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.202965 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.202974 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.202989 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.202999 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.306290 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.306348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.306365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.306389 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.306407 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.409657 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.409741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.409760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.409786 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.409803 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.512621 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.512680 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.512728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.512752 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.512773 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.616723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.616760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.616776 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.616800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.616817 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.719573 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.719663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.719674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.719706 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.719720 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.822930 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.822975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.822994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.823018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.823035 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.917730 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn"
Feb 03 06:01:11 crc kubenswrapper[4872]: E0203 06:01:11.917974 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 03 06:01:11 crc kubenswrapper[4872]: E0203 06:01:11.918089 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:01:13.91806228 +0000 UTC m=+44.500753724 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered
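The nestedpendingoperations entry above shows the kubelet's per-operation retry delay doubling: the earlier attempt on this volume was deferred 1s, this one 2s, and the next one further down 4s. A minimal sketch of that doubling-with-cap schedule follows; the cap and names are illustrative assumptions, not the kubelet's actual tunables.

```go
// backoff.go - sketch of a doubling retry delay like the
// durationBeforeRetry progression (1s, 2s, 4s, ...) seen in this log.
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous delay up to maxDelay.
// The initial 1s and the 2m cap are assumptions for illustration.
func nextDelay(prev, maxDelay time.Duration) time.Duration {
	if prev <= 0 {
		return time.Second // first retry waits 1s, as in the log
	}
	d := prev * 2
	if d > maxDelay {
		return maxDelay
	}
	return d
}

func main() {
	d := time.Duration(0)
	for attempt := 1; attempt <= 6; attempt++ {
		d = nextDelay(d, 2*time.Minute)
		fmt.Printf("attempt %d: no retries permitted for %s\n", attempt, d)
	}
}
```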
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.926154 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.926225 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.926244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.926267 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:11 crc kubenswrapper[4872]: I0203 06:01:11.926285 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:11Z","lastTransitionTime":"2026-02-03T06:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.028770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.028835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.028852 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.028944 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.028963 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.063096 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:33:57.550539958 +0000 UTC
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.121733 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn"
Feb 03 06:01:12 crc kubenswrapper[4872]: E0203 06:01:12.121916 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.131118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.131185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.131208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.131237 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.131260 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.233898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.233947 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.233966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.233989 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.234007 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.337437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.337536 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.337555 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.337581 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.337599 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.439960 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.440050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.440068 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.440093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.440110 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.544462 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.544517 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.544534 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.544556 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.544578 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.648363 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.648418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.648440 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.648479 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.648498 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.752060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.752126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.752143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.752169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.752186 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.856018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.856086 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.856104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.856134 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.856151 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.959338 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.959395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.959412 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.959437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:12 crc kubenswrapper[4872]: I0203 06:01:12.959454 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:12Z","lastTransitionTime":"2026-02-03T06:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.062489 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.062550 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.062572 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.062598 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.062616 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.063634 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:34:55.145516455 +0000 UTC
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.122127 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.122174 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.122232 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
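Every NotReady heartbeat in this section repeats the same diagnosis: the container runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/, which would normally be written once the crash-looping ovnkube-controller seen earlier comes up. A small sketch of that presence check is below; the directory is taken from the log, while the extension filter is an assumption.

```go
// cnicheck.go - sketch: report whether a CNI conf directory contains any
// configuration files, mirroring the NetworkPluginNotReady message above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // common CNI config extensions (assumption)
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
		os.Exit(1)
	}
	fmt.Printf("found CNI config(s): %v\n", confs)
}
```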
Feb 03 06:01:13 crc kubenswrapper[4872]: E0203 06:01:13.122292 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 06:01:13 crc kubenswrapper[4872]: E0203 06:01:13.122458 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 06:01:13 crc kubenswrapper[4872]: E0203 06:01:13.122593 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.166777 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.166829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.166847 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.166871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.166888 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.270234 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.270306 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.270326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.270354 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.270374 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.373310 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.373375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.373399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.373429 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.373500 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.476875 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.476939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.476958 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.476982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.476999 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.580402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.580466 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.580486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.580511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.580528 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.683671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.683749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.683766 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.683788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.683804 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.787387 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.787867 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.788097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.788296 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.788547 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.892152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.892236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.892297 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.892324 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.892340 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.941186 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:13 crc kubenswrapper[4872]: E0203 06:01:13.941343 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:13 crc kubenswrapper[4872]: E0203 06:01:13.941747 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:01:17.941729829 +0000 UTC m=+48.524421243 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.995986 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.996034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.996055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.996085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:13 crc kubenswrapper[4872]: I0203 06:01:13.996107 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:13Z","lastTransitionTime":"2026-02-03T06:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.064280 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:45:36.412470809 +0000 UTC Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.098756 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.098848 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.098866 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.098891 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.098911 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.121743 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:14 crc kubenswrapper[4872]: E0203 06:01:14.121923 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.201328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.201378 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.201399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.201421 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.201437 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.304843 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.304910 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.304933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.304968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.304989 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.408286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.408332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.408352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.408374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.408391 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.512417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.512467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.512482 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.512510 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.512526 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.615223 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.615541 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.615654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.615812 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.615953 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
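
The certificate_manager entry at 06:01:14.064280 above deserves a second look: the kubelet-serving certificate expires 2026-02-24, but the reported rotation deadline (2025-11-15) is already in the past relative to this log's clock (2026-02-03), so rotation is due immediately, and the deadline is recomputed with fresh jitter on each pass, which is why later entries report different values (2025-12-10 at 06:01:15.065570 and 2025-11-17 at 06:01:16.065759 below). A sketch of jittered deadline selection in the spirit of client-go's certificate manager, assuming a one-year certificate and a roughly 70-84% window (both assumptions):

    // Pick a rotation point partway through the validity window, jittered
    // so a fleet of kubelets does not rotate at the same instant.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        // 0.7*total plus up to 20% jitter of that, i.e. roughly 70%..84%.
        jittered := time.Duration(float64(total) * 0.7 * (1 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // NotAfter comes from the log; NotBefore assumes a one-year certificate.
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        fmt.Println(rotationDeadline(notBefore, notAfter)) // lands in Nov-Dec 2025
    }

Under those assumptions the three deadlines seen in this log (2025-11-15, 2025-12-10, 2025-11-17) all fall inside the jitter window, consistent with a rotation attempt that keeps being re-armed.
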
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.719143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.719202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.719220 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.719243 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.719260 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.822183 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.822246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.822265 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.822289 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.822307 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.924351 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.924416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.924438 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.924470 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:14 crc kubenswrapper[4872]: I0203 06:01:14.924509 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:14Z","lastTransitionTime":"2026-02-03T06:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.027587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.027660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.027677 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.027733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.027752 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.065570 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:06:14.248946798 +0000 UTC
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.122170 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.122236 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 06:01:15 crc kubenswrapper[4872]: E0203 06:01:15.122340 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.122842 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 06:01:15 crc kubenswrapper[4872]: E0203 06:01:15.122995 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 06:01:15 crc kubenswrapper[4872]: E0203 06:01:15.123294 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
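
Every Ready=False heartbeat above carries the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. For reference, a CNI network config is a small JSON file; a generic, illustrative .conflist might look like the sketch below (the network name, bridge, and subnet are invented for illustration). On this cluster the real file is expected to be written by the OpenShift network operator stack (Multus / OVN-Kubernetes), so the fix is getting that operator running, not hand-writing a file:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }
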
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.132806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.132908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.132927 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.133003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.133021 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.235337 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.235444 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.235463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.235484 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.235500 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.337995 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.338035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.338046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.338061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.338072 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.441011 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.441069 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.441083 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.441102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.441114 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.546741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.547044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.547175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.547309 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.547410 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
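
The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs at 06:01:15.122 above show the gate in action: while the runtime reports NetworkReady=false, the kubelet refuses to create sandboxes for pods that need the cluster network, and only host-network pods are exempt. A hedged paraphrase of that check, with illustrative types and names rather than actual kubelet source:

    // Network-readiness gate for pod sandbox creation (paraphrase).
    package main

    import "fmt"

    type Pod struct {
        Name        string
        HostNetwork bool
    }

    func canCreateSandbox(networkReady bool, p Pod) error {
        if !networkReady && !p.HostNetwork {
            return fmt.Errorf("network is not ready: pod %s needs the cluster network", p.Name)
        }
        return nil
    }

    func main() {
        fmt.Println(canCreateSandbox(false, Pod{Name: "network-check-target-xd92c"}))
        // A host-network pod (hypothetical example) would not be blocked:
        fmt.Println(canCreateSandbox(false, Pod{Name: "some-host-network-pod", HostNetwork: true}))
    }

This matches what the log shows: only pods on the cluster network (network-metrics-daemon, network-check-target, network-check-source, networking-console-plugin) are stuck, and each is retried roughly once per second.
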
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.650534 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.650610 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.650641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.650673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.650740 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.754004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.754393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.754566 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.754772 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.754949 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.857760 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.857809 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.857824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.857844 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.857860 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.961327 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.961602 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.961713 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.961816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:15 crc kubenswrapper[4872]: I0203 06:01:15.961898 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:15Z","lastTransitionTime":"2026-02-03T06:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.064849 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.064913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.064931 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.064955 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.064973 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.065759 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:35:13.835363815 +0000 UTC
Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.122109 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn"
Feb 03 06:01:16 crc kubenswrapper[4872]: E0203 06:01:16.122316 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e"
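
The setters.go entries keep re-recording the same Ready condition; its JSON shape maps onto a small struct, sketched below with field names mirroring the condition={...} payload in the log. Note that lastTransitionTime advances with every heartbeat here, plausibly because the status never lands on the API server (see the patch failures further down):

    // The Ready condition the kubelet keeps re-recording, as a struct whose
    // JSON tags mirror the condition payload in the setters.go entries.
    package main

    import (
        "encoding/json"
        "os"
    )

    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        c := NodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  "2026-02-03T06:01:15Z",
            LastTransitionTime: "2026-02-03T06:01:15Z",
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
        }
        json.NewEncoder(os.Stdout).Encode(&c)
    }
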
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.167327 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.167374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.167385 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.167400 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.167412 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.271116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.271173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.271191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.271214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.271232 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.374720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.375033 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.375210 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.375348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.375465 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.477665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.477749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.477766 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.477791 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.477810 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.580604 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.580659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.580670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.580705 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.580717 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.682928 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.682973 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.682991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.683014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.683030 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.684824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.684894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.684920 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.684951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.684973 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
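
The entries that follow show why none of these status updates stick: each PATCH of the node object is rejected because the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03, so the kubelet retries the patch (attempts at 06:01:16.705217, .728042, and .785918). Incidentally, the payload also records the node's resources, where allocatable is capacity minus reservations: memory 24608868Ki - 460800Ki (450 MiB reserved) = 24148068Ki, and cpu 8000m - 200m = 7800m. The failing check itself is standard x509 validity; a minimal sketch with the dates from the log (NotBefore is assumed, since only NotAfter and the verification time appear in the entries):

    // The x509 validity comparison that fails in the patch attempts below.
    package main

    import (
        "crypto/x509"
        "fmt"
        "time"
    )

    func validAt(cert *x509.Certificate, t time.Time) error {
        if t.Before(cert.NotBefore) {
            return fmt.Errorf("x509: certificate is not yet valid")
        }
        if t.After(cert.NotAfter) {
            return fmt.Errorf("x509: certificate has expired: current time %s is after %s",
                t.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        }
        return nil
    }

    func main() {
        cert := &x509.Certificate{
            NotBefore: time.Date(2025, 2, 24, 17, 21, 41, 0, time.UTC), // assumed
            NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // from the log
        }
        fmt.Println(validAt(cert, time.Date(2026, 2, 3, 6, 1, 16, 0, time.UTC)))
    }
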
Feb 03 06:01:16 crc kubenswrapper[4872]: E0203 06:01:16.705217 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:16Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.708939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.708967 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.708978 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.708994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.709005 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: E0203 06:01:16.728042 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:16Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.765026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.765078 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.765094 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.765116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.765135 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: E0203 06:01:16.785918 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:16Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.790960 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.791008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.791025 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.791044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.791059 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: E0203 06:01:16.816965 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:16Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.822521 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.822575 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.822592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.822615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.822632 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: E0203 06:01:16.841972 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:16Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:16 crc kubenswrapper[4872]: E0203 06:01:16.842200 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.844654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.844748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.844772 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.844806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.844829 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.947596 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.947653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.947674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.947732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:16 crc kubenswrapper[4872]: I0203 06:01:16.947753 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:16Z","lastTransitionTime":"2026-02-03T06:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.052743 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.052819 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.052841 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.052870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.052890 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.067226 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:19:37.668284426 +0000 UTC Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.122179 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.122251 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.122195 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:17 crc kubenswrapper[4872]: E0203 06:01:17.122365 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:17 crc kubenswrapper[4872]: E0203 06:01:17.122490 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:17 crc kubenswrapper[4872]: E0203 06:01:17.122561 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.156145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.156217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.156241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.156273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.156293 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.259636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.259728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.259745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.259770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.259787 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.362845 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.362891 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.362903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.362952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.362966 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.465417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.465481 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.465498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.465523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.465541 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.485621 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.495659 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.503456 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.526610 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.550515 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b
6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.568860 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.568925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.568942 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.568968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.568988 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.569423 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.588103 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.604726 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.618500 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.629629 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.646229 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.659323 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.671589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.671619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.671626 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.671641 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.671649 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.671648 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.692195 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.709339 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.723748 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.739802 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.757606 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:17Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.774467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.774679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc 
kubenswrapper[4872]: I0203 06:01:17.774857 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.775022 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.775153 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.878214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.878526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.878680 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.878842 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.878996 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.981666 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.981741 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.981758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.981781 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.981797 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:17Z","lastTransitionTime":"2026-02-03T06:01:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:17 crc kubenswrapper[4872]: I0203 06:01:17.985178 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:17 crc kubenswrapper[4872]: E0203 06:01:17.985365 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:17 crc kubenswrapper[4872]: E0203 06:01:17.985431 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:01:25.985414432 +0000 UTC m=+56.568105856 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.067952 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:44:21.217962643 +0000 UTC Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.084838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.084913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.084932 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.084957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.084974 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.122750 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:18 crc kubenswrapper[4872]: E0203 06:01:18.123074 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.187655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.188060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.188235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.188444 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.188580 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.291970 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.292019 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.292035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.292056 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.292074 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.395440 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.395829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.395966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.396138 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.396258 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.499857 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.500238 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.500392 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.500538 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.500664 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.604568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.604638 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.604663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.604729 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.604750 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.707793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.707840 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.707859 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.707883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.707921 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.811667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.812177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.812287 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.812405 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.812518 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.915532 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.915603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.915621 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.915646 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:18 crc kubenswrapper[4872]: I0203 06:01:18.915664 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:18Z","lastTransitionTime":"2026-02-03T06:01:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.019935 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.019985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.020001 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.020024 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.020040 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.068333 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:07:34.850247947 +0000 UTC Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.121713 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.121761 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.121842 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:19 crc kubenswrapper[4872]: E0203 06:01:19.121868 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:19 crc kubenswrapper[4872]: E0203 06:01:19.122036 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:19 crc kubenswrapper[4872]: E0203 06:01:19.122179 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.123284 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.123383 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.123412 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.123440 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.123462 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.226312 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.226415 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.226432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.226462 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.226481 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.329964 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.330019 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.330035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.330058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.330074 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.433437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.433490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.433507 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.433529 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.433546 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.536324 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.536375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.536393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.536418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.536437 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.639612 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.640052 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.640229 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.640428 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.640575 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.743283 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.743328 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.743345 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.743369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.743385 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.846614 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.846725 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.846744 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.846767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.846785 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.949506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.949919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.950057 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.950194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:19 crc kubenswrapper[4872]: I0203 06:01:19.950308 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:19Z","lastTransitionTime":"2026-02-03T06:01:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.053733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.053788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.053818 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.053841 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.053862 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.068783 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:45:08.461770439 +0000 UTC Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.122539 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:20 crc kubenswrapper[4872]: E0203 06:01:20.122787 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.144485 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.156545 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.156609 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.156626 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.156652 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.156671 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.159126 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.176321 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.192881 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.207241 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.231521 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.251440 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.259615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.259893 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.260001 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.260093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.260171 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.271470 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.287996 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.305718 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.332275 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.358172 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.362814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.362854 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.362871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.362892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.362909 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.382780 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.398139 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.413109 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.432742 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.464400 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b
6c65f2f344deeb51d08899a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:20Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.465101 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.465145 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.465155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.465170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.465180 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.568463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.568512 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.568529 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.568551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.568567 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.670996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.671039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.671051 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.671067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.671080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.773955 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.774009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.774020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.774036 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.774048 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.876387 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.876445 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.876461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.876485 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.876502 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.979357 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.979414 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.979430 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.979458 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:20 crc kubenswrapper[4872]: I0203 06:01:20.979476 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:20Z","lastTransitionTime":"2026-02-03T06:01:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.069456 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:10:28.02438876 +0000 UTC Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.082368 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.082430 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.082446 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.082476 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.082515 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.122706 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.122839 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.122879 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.122710 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.123032 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.123100 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.185556 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.185607 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.185624 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.185647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.185665 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.223835 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.224072 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:01:53.223999729 +0000 UTC m=+83.806691183 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.288163 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.288253 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.288272 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.288296 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.288312 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.325075 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.325170 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.325243 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.325309 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325359 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325402 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325406 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325422 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325468 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325514 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:53.325488636 +0000 UTC m=+83.908180080 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325535 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325577 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325591 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325544 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:53.325531917 +0000 UTC m=+83.908223371 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325651 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:53.325618639 +0000 UTC m=+83.908310073 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:01:21 crc kubenswrapper[4872]: E0203 06:01:21.325719 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:01:53.325675151 +0000 UTC m=+83.908366685 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.391454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.391508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.391527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.391549 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.391565 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.495482 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.495544 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.495568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.495643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.495667 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.598616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.598666 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.598676 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.598717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.598729 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.701860 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.701925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.701946 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.701972 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.701992 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.804878 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.804949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.804972 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.804999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.805019 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.908445 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.908832 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.908960 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.909095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:21 crc kubenswrapper[4872]: I0203 06:01:21.909210 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:21Z","lastTransitionTime":"2026-02-03T06:01:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.012888 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.013192 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.013325 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.013498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.013618 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.070015 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:54:43.04890641 +0000 UTC Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.116096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.116151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.116169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.116191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.116207 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.122724 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:22 crc kubenswrapper[4872]: E0203 06:01:22.122883 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.218344 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.218391 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.218407 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.218428 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.218444 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.321131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.321184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.321200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.321224 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.321241 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.424572 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.424620 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.424642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.424669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.424713 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.528136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.528200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.528216 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.528240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.528256 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.631102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.631160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.631177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.631202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.631219 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.734185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.734226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.734235 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.734248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.734258 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.837665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.837733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.837744 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.837764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.837777 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.942297 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.942344 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.942354 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.942372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:22 crc kubenswrapper[4872]: I0203 06:01:22.942383 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:22Z","lastTransitionTime":"2026-02-03T06:01:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.044312 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.044382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.044406 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.044433 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.044454 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.071799 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:15:11.421500059 +0000 UTC Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.122031 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.122122 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.122122 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:23 crc kubenswrapper[4872]: E0203 06:01:23.122290 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:23 crc kubenswrapper[4872]: E0203 06:01:23.122484 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:23 crc kubenswrapper[4872]: E0203 06:01:23.122575 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.147654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.147711 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.147721 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.147733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.147742 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.250182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.250315 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.250355 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.250377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.250394 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.352996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.353381 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.353534 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.353725 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.353870 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.458034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.458080 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.458096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.458118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.458136 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.561948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.562279 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.562464 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.562611 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.562779 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.666303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.666616 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.666862 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.667041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.667194 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.770144 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.770449 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.770523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.770587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.770799 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.874377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.874443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.874459 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.874485 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.874502 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.979301 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.979654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.979992 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.980200 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:23 crc kubenswrapper[4872]: I0203 06:01:23.980395 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:23Z","lastTransitionTime":"2026-02-03T06:01:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.071936 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:08:19.76251316 +0000 UTC Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.083814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.084010 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.084411 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.084800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.084950 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.122230 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:24 crc kubenswrapper[4872]: E0203 06:01:24.122635 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.187780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.187855 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.187868 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.187885 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.187896 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.291989 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.292090 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.292129 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.292153 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.292172 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.395162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.395236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.395254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.395280 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.395299 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.498466 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.498520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.498539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.498561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.498578 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.602375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.602441 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.602463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.602489 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.602513 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.705107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.705165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.705186 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.705216 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.705238 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.807764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.807816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.807827 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.807847 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.807861 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.911202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.911265 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.911281 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.911305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:24 crc kubenswrapper[4872]: I0203 06:01:24.911322 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:24Z","lastTransitionTime":"2026-02-03T06:01:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.013868 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.013919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.013934 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.013952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.013963 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.072838 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:27:28.870920552 +0000 UTC Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.117272 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.117333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.117349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.117373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.117390 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.122535 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.122635 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.122635 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:25 crc kubenswrapper[4872]: E0203 06:01:25.122911 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:25 crc kubenswrapper[4872]: E0203 06:01:25.123228 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:25 crc kubenswrapper[4872]: E0203 06:01:25.124047 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.124404 4872 scope.go:117] "RemoveContainer" containerID="1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.219879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.220140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.220151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.220169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.220180 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.323127 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.323158 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.323174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.323198 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.323214 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.426100 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.426156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.426173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.426195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.426213 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.529420 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.529474 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.529488 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.529509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.529524 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.548990 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/1.log" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.552302 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.552930 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.585666 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.606020 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.626576 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.632325 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.632357 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.632369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.632389 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.632402 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.653848 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.673230 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.687787 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.702446 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.718254 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9
c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.728335 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.734507 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.734542 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.734552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.734567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.734576 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.740963 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.759505 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.801341 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.836526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.836571 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.836582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.836601 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.836615 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.845034 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.862052 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.876405 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.887488 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.898254 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:25Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.939758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.939802 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.939813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.939829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:25 crc kubenswrapper[4872]: I0203 06:01:25.939840 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:25Z","lastTransitionTime":"2026-02-03T06:01:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.042346 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.042402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.042418 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.042442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.042457 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.073079 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:04:50.097908052 +0000 UTC Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.073450 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:26 crc kubenswrapper[4872]: E0203 06:01:26.073553 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:26 crc kubenswrapper[4872]: E0203 06:01:26.073600 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:01:42.073586195 +0000 UTC m=+72.656277609 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.122762 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:26 crc kubenswrapper[4872]: E0203 06:01:26.122986 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
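The MountVolume.SetUp failure above is gated by the kubelet's exponential backoff: after repeated failures to fetch the unregistered secret openshift-multus/metrics-daemon-secret, the next attempt is pushed out by 16s (durationBeforeRetry). The observed 16s is consistent with a doubling series (0.5s, 1s, 2s, 4s, 8s, 16s). The sketch below is a hedged approximation of that doubling-with-cap retry gate; the backoff type, constants, and method names are illustrative assumptions, not the kubelet's actual nestedpendingoperations implementation.

```go
// Hypothetical sketch of the retry gating behind "No retries permitted
// until ... (durationBeforeRetry 16s)": delay doubles per failure up to a
// cap. Constants are assumptions, not kubelet source.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // assumed initial backoff
	maxDelay     = 2 * time.Minute        // assumed cap
)

type backoff struct {
	lastFailure time.Time
	delay       time.Duration
}

// fail records a failure and doubles the delay before the next retry, capped.
func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = initialDelay
	} else {
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.lastFailure = now
}

// allowed reports whether enough time has passed to permit another attempt.
func (b *backoff) allowed(now time.Time) bool {
	return now.After(b.lastFailure.Add(b.delay))
}

func main() {
	var b backoff
	now := time.Now()
	// Six consecutive failures: 0.5s, 1s, 2s, 4s, 8s, 16s -- the sixth
	// matches the 16s durationBeforeRetry seen in the log.
	for i := 1; i <= 6; i++ {
		b.fail(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			i, now.Add(b.delay).Format("15:04:05"), b.delay)
	}
	_ = b.allowed(now) // gate consulted before each mount attempt
}
```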
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.145320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.145402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.145425 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.145453 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.145472 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.247980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.248041 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.248058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.248085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.248103 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.351594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.351647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.351663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.351706 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.351721 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.455251 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.455342 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.455367 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.455396 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.455418 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.557833 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.557881 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.557903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.557929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.557948 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.559849 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/2.log" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.560858 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/1.log" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.564334 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348" exitCode=1 Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.564382 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.564424 4872 scope.go:117] "RemoveContainer" containerID="1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.565549 4872 scope.go:117] "RemoveContainer" containerID="47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348" Feb 03 06:01:26 crc kubenswrapper[4872]: E0203 06:01:26.565850 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.591488 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.617245 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.634553 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.647815 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.660899 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.660983 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.661003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.661028 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.661046 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.666805 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.683025 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.699644 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.714542 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.735967 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.758249 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.764477 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.764534 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.764547 4872 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.764564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.764576 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.787271 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.814771 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c
90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1befe87d775821b4f3c89aafc097b7ccfd25b91b6c65f2f344deeb51d08899a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:07Z\\\",\\\"message\\\":\\\"I0203 06:01:07.801489 6267 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0203 06:01:07.801510 6267 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 06:01:07.801537 6267 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 06:01:07.801551 6267 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 06:01:07.801584 6267 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 06:01:07.801609 6267 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 06:01:07.801643 6267 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 06:01:07.801653 6267 factory.go:656] Stopping watch factory\\\\nI0203 06:01:07.801659 6267 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 06:01:07.801675 6267 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 06:01:07.801713 6267 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 06:01:07.801715 6267 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0203 06:01:07.801673 6267 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0203 06:01:07.801734 6267 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 06:01:07.801744 6267 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 06:01:07.801751 6267 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node 
network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ini
tContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.832637 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.851778 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.867345 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.867386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.867402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.867425 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.867443 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.871840 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.890809 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.911202 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:26Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.969636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.969671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.969680 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.969710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:26 crc kubenswrapper[4872]: I0203 06:01:26.969722 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:26Z","lastTransitionTime":"2026-02-03T06:01:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.072000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.072040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.072052 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.072071 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.072082 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.073219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.073211 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:57:43.393684747 +0000 UTC Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.073276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.073297 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.073320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.073338 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.093392 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 
2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.098106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.098146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.098159 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.098175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.098187 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.113590 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 
2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.118170 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.118221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.118238 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.118262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.118280 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.122928 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.122969 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.123082 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.123112 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.123390 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.123288 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.138032 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 
2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.143613 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.143666 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.143748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.143775 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.143793 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.162140 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 
2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.167590 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.167642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.167660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.167684 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.167727 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.186530 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 
2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.186800 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.188710 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.188770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.188789 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.188814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.188833 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.291434 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.291494 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.291511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.291535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.291554 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.394156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.394207 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.394226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.394250 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.394266 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.497383 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.497460 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.497482 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.497513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.497536 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.570738 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/2.log" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.579133 4872 scope.go:117] "RemoveContainer" containerID="47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348" Feb 03 06:01:27 crc kubenswrapper[4872]: E0203 06:01:27.579576 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.600894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.600960 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.600979 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.601003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.600926 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.601021 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.625797 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.645820 4872 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.665592 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.683228 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.702225 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.703995 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.704040 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.704058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.704082 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.704104 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.724073 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.755019 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c
90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.774030 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.796657 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.809417 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.809475 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.809493 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.809516 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.809533 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.814964 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.831340 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.845898 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.870307 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.890145 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.906364 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.912122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.912180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.912198 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.912221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.912239 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:27Z","lastTransitionTime":"2026-02-03T06:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:27 crc kubenswrapper[4872]: I0203 06:01:27.919426 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:27Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.015187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.015231 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.015248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.015270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.015290 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.073803 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:30:08.362103316 +0000 UTC Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.118012 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.118069 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.118085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.118109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.118128 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.122339 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:28 crc kubenswrapper[4872]: E0203 06:01:28.122556 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.221079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.221136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.221154 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.221177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.221194 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.323505 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.323558 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.323577 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.323602 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.323621 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.427015 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.427072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.427089 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.427113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.427133 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.530023 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.530075 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.530095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.530118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.530137 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.633241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.633291 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.633309 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.633332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.633349 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.736835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.736886 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.736903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.736928 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.736945 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.840135 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.840203 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.840220 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.840245 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.840265 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.942437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.942467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.942475 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.942486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:28 crc kubenswrapper[4872]: I0203 06:01:28.942495 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:28Z","lastTransitionTime":"2026-02-03T06:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.045342 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.045402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.045419 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.045444 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.045461 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.074851 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:24:11.617742033 +0000 UTC Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.122502 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.122546 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.122502 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:29 crc kubenswrapper[4872]: E0203 06:01:29.122738 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:29 crc kubenswrapper[4872]: E0203 06:01:29.122930 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:29 crc kubenswrapper[4872]: E0203 06:01:29.123014 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.148058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.148116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.148134 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.148160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.148178 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.251254 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.251302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.251318 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.251341 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.251357 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.354303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.354359 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.354375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.354416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.354435 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.456782 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.456838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.456855 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.456879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.456897 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.560553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.560681 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.560737 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.560761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.560778 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.664092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.664147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.664165 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.664189 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.664208 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.767066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.767128 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.767146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.767169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.767188 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.871018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.871077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.871093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.871117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.871134 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.974393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.974448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.974464 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.974489 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:29 crc kubenswrapper[4872]: I0203 06:01:29.974506 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:29Z","lastTransitionTime":"2026-02-03T06:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.075579 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:09:55.183608152 +0000 UTC Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.079517 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.079582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.079600 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.079626 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.079644 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.123014 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:30 crc kubenswrapper[4872]: E0203 06:01:30.123336 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.146569 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.165053 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.185270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.185429 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.185454 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.185478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.185498 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.185620 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.204678 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.236055 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9
c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.254238 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.270313 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.284972 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.288244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.288332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 
crc kubenswrapper[4872]: I0203 06:01:30.288354 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.288409 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.288449 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.300329 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T
06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.314289 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.331101 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.353069 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.373096 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.390762 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.392515 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.392738 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.392763 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.392927 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.392952 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.412260 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.438816 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f77
6616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.465818 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:30Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.496221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.496410 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.496511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.496594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.496676 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.599541 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.599628 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.599647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.599672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.599726 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.702984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.703031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.703046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.703066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.703080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.806000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.806046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.806062 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.806084 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.806100 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.909297 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.909355 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.909372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.909397 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:30 crc kubenswrapper[4872]: I0203 06:01:30.909414 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:30Z","lastTransitionTime":"2026-02-03T06:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.012554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.012782 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.012925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.013091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.013214 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.076037 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:05:47.764580634 +0000 UTC Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.116448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.116493 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.116510 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.116533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.116550 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.121855 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.121908 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:31 crc kubenswrapper[4872]: E0203 06:01:31.122015 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.121862 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:31 crc kubenswrapper[4872]: E0203 06:01:31.122172 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:31 crc kubenswrapper[4872]: E0203 06:01:31.122266 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.219977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.220025 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.220043 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.220064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.220081 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.323166 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.323202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.323210 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.323223 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.323232 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.425844 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.425923 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.425944 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.425975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.425996 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.528920 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.528985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.529004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.529052 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.529071 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.632151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.632214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.632233 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.632258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.632277 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.735125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.735185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.735203 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.735227 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.735250 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.838567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.838623 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.838640 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.838663 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.838681 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.941058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.941126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.941139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.941155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:31 crc kubenswrapper[4872]: I0203 06:01:31.941166 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:31Z","lastTransitionTime":"2026-02-03T06:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.043905 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.043966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.043982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.044004 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.044019 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.076710 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:58:55.149963074 +0000 UTC Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.122300 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:32 crc kubenswrapper[4872]: E0203 06:01:32.122502 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.146510 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.146555 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.146564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.146574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.146583 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.250008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.250074 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.250092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.250117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.250136 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.353034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.353092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.353108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.353130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.353147 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.455558 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.455619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.455637 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.455661 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.455681 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.558680 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.558761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.558778 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.558802 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.558823 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.661281 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.661321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.661333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.661351 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.661363 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.764859 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.764924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.764947 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.764975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.764996 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.868359 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.868420 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.868436 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.868464 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.868486 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.971562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.971619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.971638 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.971661 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:32 crc kubenswrapper[4872]: I0203 06:01:32.971679 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:32Z","lastTransitionTime":"2026-02-03T06:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.074826 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.074891 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.074910 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.074935 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.074953 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.077042 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:03:04.685581154 +0000 UTC Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.122465 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:33 crc kubenswrapper[4872]: E0203 06:01:33.122645 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.122973 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:33 crc kubenswrapper[4872]: E0203 06:01:33.123067 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.123245 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:33 crc kubenswrapper[4872]: E0203 06:01:33.123325 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.177439 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.177497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.177514 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.177537 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.177554 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.280558 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.280608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.280625 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.280649 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.280666 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.382636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.382676 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.382703 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.382718 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.382729 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.485471 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.485568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.485635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.485662 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.485741 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.587908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.587954 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.587967 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.587984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.587995 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.691300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.691337 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.691344 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.691360 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.691369 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.794525 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.794562 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.794572 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.794586 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.794597 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.897431 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.897467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.897478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.897494 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.897506 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.999788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.999833 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.999846 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.999864 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:33 crc kubenswrapper[4872]: I0203 06:01:33.999876 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:33Z","lastTransitionTime":"2026-02-03T06:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.077933 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:02:22.823169253 +0000 UTC Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.101676 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.101754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.101771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.101795 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.101814 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.122009 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:34 crc kubenswrapper[4872]: E0203 06:01:34.122196 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.204612 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.204656 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.204664 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.204694 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.204706 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.307903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.307952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.307963 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.307980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.307991 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.410530 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.410561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.410572 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.410588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.410599 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.513821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.513862 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.513871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.513886 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.513895 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.615703 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.615735 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.615745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.615758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.615766 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.717585 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.717648 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.717665 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.717728 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.717748 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.819892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.819926 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.819940 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.819957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.819969 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.921813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.921846 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.921854 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.921867 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:34 crc kubenswrapper[4872]: I0203 06:01:34.921875 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:34Z","lastTransitionTime":"2026-02-03T06:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.024169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.024234 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.024246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.024264 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.024277 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.078978 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:10:10.802984599 +0000 UTC Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.122550 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:35 crc kubenswrapper[4872]: E0203 06:01:35.122658 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.122824 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:35 crc kubenswrapper[4872]: E0203 06:01:35.122904 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.123001 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:35 crc kubenswrapper[4872]: E0203 06:01:35.123050 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.125727 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.125749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.125758 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.125770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.125778 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.227464 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.227499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.227509 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.227524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.227539 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.329246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.329277 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.329287 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.329304 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.329315 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.432346 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.432392 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.432411 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.432434 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.432450 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.534387 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.534429 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.534445 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.534466 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.534481 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.637006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.637062 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.637085 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.637113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.637137 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.739794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.739836 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.739844 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.739858 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.739893 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.842569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.842610 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.842618 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.842631 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.842640 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.947309 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.947349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.947357 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.947370 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:35 crc kubenswrapper[4872]: I0203 06:01:35.947379 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:35Z","lastTransitionTime":"2026-02-03T06:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.050053 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.050090 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.050099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.050113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.050123 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.079644 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:32:28.759138679 +0000 UTC Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.125887 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:36 crc kubenswrapper[4872]: E0203 06:01:36.125992 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.151966 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.152020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.152039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.152063 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.152081 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.254782 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.254857 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.254882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.254906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.254925 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.357659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.357708 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.357717 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.357730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.357740 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.459535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.459571 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.459582 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.459597 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.459630 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.562779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.562835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.562852 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.562876 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.562895 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.665109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.665171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.665181 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.665195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.665204 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.768056 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.768094 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.768106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.768122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.768133 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.871222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.871720 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.871829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.871924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.872026 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.973588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.973613 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.973636 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.973648 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:36 crc kubenswrapper[4872]: I0203 06:01:36.973657 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:36Z","lastTransitionTime":"2026-02-03T06:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.076155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.076191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.076202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.076217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.076228 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.080601 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:14:03.668978877 +0000 UTC Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.122011 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.122046 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:37 crc kubenswrapper[4872]: E0203 06:01:37.122120 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:37 crc kubenswrapper[4872]: E0203 06:01:37.122177 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.122341 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:37 crc kubenswrapper[4872]: E0203 06:01:37.122496 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.177780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.178102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.178293 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.178486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.178673 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.280825 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.280879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.280888 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.280902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.280913 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.382512 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.382539 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.382549 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.382561 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.382570 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.484350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.484522 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.484589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.484667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.484939 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.559201 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.559253 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.559274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.559302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.559322 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: E0203 06:01:37.573348 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:37Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.584824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.584857 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.584865 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.584878 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.584888 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: E0203 06:01:37.597209 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.600436 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.600639 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.600933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.601152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.601345 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.617443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.617477 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.617486 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.617503 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.617515 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.632767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.632826 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.632852 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.632880 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.632924 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:37 crc kubenswrapper[4872]: E0203 06:01:37.645307 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.647098 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.647132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.647144 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.647161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.647173 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.748856 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.748906 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.748922 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.748945 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.748962 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.851976 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.852013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.852023 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.852038 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.852050 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.954919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.954954 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.954961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.954975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:37 crc kubenswrapper[4872]: I0203 06:01:37.954983 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:37Z","lastTransitionTime":"2026-02-03T06:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.057764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.057800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.057810 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.057826 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.057835 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.081261 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:56:47.026007841 +0000 UTC Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.121826 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:38 crc kubenswrapper[4872]: E0203 06:01:38.122003 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.160125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.160182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.160195 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.160211 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.160221 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.262143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.262185 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.262194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.262206 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.262215 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.363743 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.363761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.363769 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.363782 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.363791 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.465462 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.465506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.465519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.465533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.465546 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.567935 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.567980 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.567995 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.568014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.568026 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.669904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.669952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.669961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.669975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.669986 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.772319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.772363 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.772374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.772388 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.772399 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.874353 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.874386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.874395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.874422 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.874437 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.977735 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.977808 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.977820 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.977835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:38 crc kubenswrapper[4872]: I0203 06:01:38.977885 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:38Z","lastTransitionTime":"2026-02-03T06:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.080083 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.080121 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.080131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.080166 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.080176 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.082376 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:27:46.045428572 +0000 UTC Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.122050 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.122057 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.122095 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:39 crc kubenswrapper[4872]: E0203 06:01:39.122510 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.122738 4872 scope.go:117] "RemoveContainer" containerID="47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348" Feb 03 06:01:39 crc kubenswrapper[4872]: E0203 06:01:39.122734 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:39 crc kubenswrapper[4872]: E0203 06:01:39.122799 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:39 crc kubenswrapper[4872]: E0203 06:01:39.122906 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.181649 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.181723 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.181739 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.181759 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.181776 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.283962 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.283984 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.283992 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.284002 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.284010 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.385780 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.385813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.385823 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.385837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.385847 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.487842 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.487889 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.487901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.487917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.487928 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.591953 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.591985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.591994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.592006 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.592014 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.694125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.694175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.694191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.694210 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.694229 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.795650 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.795677 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.795697 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.795709 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.795719 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.897108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.897133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.897141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.897151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.897159 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.999788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.999870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:39 crc kubenswrapper[4872]: I0203 06:01:39.999878 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:39.999890 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:39.999899 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:39Z","lastTransitionTime":"2026-02-03T06:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.082994 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:46:06.365422187 +0000 UTC Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.101954 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.101982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.101990 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.102003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.102014 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.121771 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:40 crc kubenswrapper[4872]: E0203 06:01:40.121982 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.135771 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.152433 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 
06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 
06:01:40.164989 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.179168 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.190552 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.199732 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.203552 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.203580 4872 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.203591 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.203606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.203616 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.215599 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.229656 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.244419 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.259194 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.280260 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.305207 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.305240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc 
kubenswrapper[4872]: I0203 06:01:40.305248 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.305262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.305273 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.308716 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c
90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.319877 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.336931 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.347798 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.364420 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.377434 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:40Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.407398 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.407431 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.407439 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.407453 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.407462 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.509067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.509130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.509140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.509154 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.509163 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.615020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.615070 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.615083 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.615098 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.615108 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.716830 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.717050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.717116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.717324 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.717561 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.819792 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.819827 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.819838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.819853 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.819864 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.922912 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.922938 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.922950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.922963 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:40 crc kubenswrapper[4872]: I0203 06:01:40.922973 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:40Z","lastTransitionTime":"2026-02-03T06:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.025380 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.025673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.025831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.025973 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.026091 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.083679 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:10:21.116622782 +0000 UTC Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.122055 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.122105 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.122131 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:41 crc kubenswrapper[4872]: E0203 06:01:41.122186 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:41 crc kubenswrapper[4872]: E0203 06:01:41.122311 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:41 crc kubenswrapper[4872]: E0203 06:01:41.122475 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.130193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.131347 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.132203 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.131553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.132847 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.133068 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.235793 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.235846 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.235855 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.235872 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.235901 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.339311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.339374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.339386 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.339407 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.339419 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.442049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.442087 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.442096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.442110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.442122 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.544729 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.544769 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.544779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.544794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.544805 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.646863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.647111 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.647203 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.647311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.647412 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.750086 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.750136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.750155 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.750180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.750197 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.852090 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.852136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.852153 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.852174 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.852190 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.955005 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.955042 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.955062 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.955082 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:41 crc kubenswrapper[4872]: I0203 06:01:41.955097 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:41Z","lastTransitionTime":"2026-02-03T06:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.057014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.057295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.057379 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.057464 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.057539 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.084190 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:30:48.16663882 +0000 UTC Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.122650 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:42 crc kubenswrapper[4872]: E0203 06:01:42.122892 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.130635 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:42 crc kubenswrapper[4872]: E0203 06:01:42.130812 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:42 crc kubenswrapper[4872]: E0203 06:01:42.130873 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:02:14.130856588 +0000 UTC m=+104.713548022 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.159099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.159139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.159151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.159190 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.159203 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.261421 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.261456 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.261467 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.261482 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.261493 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.363525 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.363753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.363837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.363956 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.364042 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.466751 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.466814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.466831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.466857 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.466879 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.569466 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.569526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.569548 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.569579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.569597 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.672672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.673027 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.673123 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.673207 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.673300 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.776950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.777001 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.777025 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.777048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.777066 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.879870 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.880092 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.880190 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.880273 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.880357 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.982343 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.982397 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.982421 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.982451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:42 crc kubenswrapper[4872]: I0203 06:01:42.982476 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:42Z","lastTransitionTime":"2026-02-03T06:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.084275 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:17:43.17428531 +0000 UTC Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.088615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.089060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.089103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.089159 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.089229 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.121716 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.121761 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:43 crc kubenswrapper[4872]: E0203 06:01:43.121877 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:43 crc kubenswrapper[4872]: E0203 06:01:43.121983 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.122051 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:43 crc kubenswrapper[4872]: E0203 06:01:43.122150 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.191667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.191722 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.191731 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.191745 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.191754 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.294081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.294112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.294122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.294135 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.294144 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.395761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.395813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.395821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.395837 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.395863 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.499098 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.499125 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.499133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.499161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.499169 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.602873 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.602903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.602911 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.602924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.602932 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.627549 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/0.log" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.627595 4872 generic.go:334] "Generic (PLEG): container finished" podID="db59aed5-04bc-4793-8938-196aace29feb" containerID="d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e" exitCode=1 Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.627649 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerDied","Data":"d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.630180 4872 scope.go:117] "RemoveContainer" containerID="d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.648709 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.662222 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.681048 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.698814 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.705767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.705831 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 
06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.705853 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.705884 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.705902 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.711194 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.726020 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.742896 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.756654 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.769816 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.784052 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.799749 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.808818 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.808852 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc 
kubenswrapper[4872]: I0203 06:01:43.808863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.808878 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.808889 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.814227 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.829881 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"2026-02-03T06:00:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215\\\\n2026-02-03T06:00:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215 to /host/opt/cni/bin/\\\\n2026-02-03T06:00:58Z [verbose] multus-daemon 
started\\\\n2026-02-03T06:00:58Z [verbose] Readiness Indicator file check\\\\n2026-02-03T06:01:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.853986 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.868321 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9619f486-cce6-4470-a125-69dcf9a50c97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e85
55133e1fc2fee6b68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.884856 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.900088 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.911398 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.911423 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.911432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.911447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.911457 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:43Z","lastTransitionTime":"2026-02-03T06:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:43 crc kubenswrapper[4872]: I0203 06:01:43.917477 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:43Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.013026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.013047 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.013063 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.013074 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.013083 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.085011 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:41:27.143113763 +0000 UTC Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.115759 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.115809 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.115825 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.115847 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.115864 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.122214 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:44 crc kubenswrapper[4872]: E0203 06:01:44.122396 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.218219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.218245 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.218253 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.218263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.218271 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.320607 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.320635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.320646 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.320660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.320671 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.423146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.423193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.423214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.423236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.423253 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.526553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.526615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.526635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.526661 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.526723 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.628982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.629039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.629049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.629064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.629073 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.632464 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/0.log" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.632534 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerStarted","Data":"6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.650451 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c
90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.664957 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9619f486-cce6-4470-a125-69dcf9a50c97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e85
55133e1fc2fee6b68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.681204 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.699363 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.712184 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.724216 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.732001 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.732096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.732109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.732121 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.732129 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.738486 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"2026-02-03T06:00:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215\\\\n2026-02-03T06:00:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215 to /host/opt/cni/bin/\\\\n2026-02-03T06:00:58Z [verbose] multus-daemon started\\\\n2026-02-03T06:00:58Z [verbose] Readiness Indicator file check\\\\n2026-02-03T06:01:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.751798 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.768213 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.780064 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.791636 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.803360 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.815681 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.829986 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.835871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.835908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.835924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.835945 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.835962 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.842072 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.857615 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.872520 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.885415 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:44Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.939406 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.939446 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:44 crc 
kubenswrapper[4872]: I0203 06:01:44.939461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.939484 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:44 crc kubenswrapper[4872]: I0203 06:01:44.939500 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:44Z","lastTransitionTime":"2026-02-03T06:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.041851 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.041898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.041907 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.041924 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.041934 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.085664 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 19:57:20.75606317 +0000 UTC Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.122054 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:45 crc kubenswrapper[4872]: E0203 06:01:45.122181 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.122384 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:45 crc kubenswrapper[4872]: E0203 06:01:45.122452 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.122605 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:45 crc kubenswrapper[4872]: E0203 06:01:45.122677 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.144347 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.144374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.144382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.144394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.144403 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.246919 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.246964 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.246975 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.246993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.247005 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.349779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.349824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.349835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.349850 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.349863 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.453151 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.453234 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.453251 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.453286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.453304 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.555461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.555511 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.555527 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.555547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.555558 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.658653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.658768 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.658789 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.659172 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.659200 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.761619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.761671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.761705 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.761726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.761741 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.865302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.865363 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.865382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.865411 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.865432 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.967882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.967945 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.967961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.967985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:45 crc kubenswrapper[4872]: I0203 06:01:45.968003 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:45Z","lastTransitionTime":"2026-02-03T06:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.071400 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.071528 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.071549 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.071573 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.071591 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.086211 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:01:02.62150021 +0000 UTC Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.122831 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:46 crc kubenswrapper[4872]: E0203 06:01:46.123066 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.174478 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.174535 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.174558 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.174590 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.174612 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.278281 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.278354 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.278377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.278407 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.278429 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.383402 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.383471 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.383490 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.383513 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.383530 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.486224 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.486295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.486318 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.486345 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.486367 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.589845 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.589914 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.589933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.589957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.589975 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.692845 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.692907 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.692925 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.692950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.692992 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.799643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.799750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.799772 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.799801 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.799822 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.903152 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.903219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.903236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.903261 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:46 crc kubenswrapper[4872]: I0203 06:01:46.903278 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:46Z","lastTransitionTime":"2026-02-03T06:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.006197 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.006255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.006271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.006295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.006313 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.086881 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:00:55.025629715 +0000 UTC Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.109361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.109421 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.109444 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.109471 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.109491 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.121986 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.122019 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.122014 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.122168 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.122284 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.122423 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.212761 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.212799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.212816 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.212839 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.212857 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.316109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.316162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.316178 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.316202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.316219 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.418845 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.418892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.418913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.418939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.418960 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.521750 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.521823 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.521840 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.521865 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.521882 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.624774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.624932 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.624954 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.624996 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.625024 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.728105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.728366 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.728528 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.728670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.728909 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.832021 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.832103 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.832123 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.832150 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.832168 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.863911 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.864132 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.864298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.864451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.864589 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.885429 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:47Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.890032 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.890082 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.890099 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.890123 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.890140 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.910776 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:47Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.915725 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.915908 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.916068 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.916218 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.916375 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.936105 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:47Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.941141 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.941335 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.941461 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.941587 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.941805 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.961168 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:47Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.965981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.966076 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.966096 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.966119 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.966177 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.986394 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:47Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:47 crc kubenswrapper[4872]: E0203 06:01:47.986645 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.988792 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.988958 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.989102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.989237 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:47 crc kubenswrapper[4872]: I0203 06:01:47.989359 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:47Z","lastTransitionTime":"2026-02-03T06:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.087040 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:49:56.001648695 +0000 UTC Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.092271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.092329 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.092347 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.092374 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.092392 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.121946 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:48 crc kubenswrapper[4872]: E0203 06:01:48.122184 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.195442 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.195767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.195958 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.196294 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.196604 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.300171 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.300232 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.300244 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.300263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.300277 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.402278 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.402314 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.402321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.402334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.402342 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.506395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.506747 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.506882 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.507003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.507140 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.610020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.610072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.610091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.610114 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.610134 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.713271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.713318 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.713333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.713351 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.713366 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.816303 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.816335 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.816343 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.816355 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.816387 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.919873 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.919938 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.919960 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.919985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:48 crc kubenswrapper[4872]: I0203 06:01:48.920005 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:48Z","lastTransitionTime":"2026-02-03T06:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.022703 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.022784 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.022797 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.022814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.022824 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.087177 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 05:27:51.455535536 +0000 UTC Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.121644 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.121766 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:49 crc kubenswrapper[4872]: E0203 06:01:49.121791 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.121941 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:49 crc kubenswrapper[4872]: E0203 06:01:49.122007 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:49 crc kubenswrapper[4872]: E0203 06:01:49.122234 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.126191 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.126239 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.126261 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.126288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.126311 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.229093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.229167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.229184 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.229207 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.229226 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.331777 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.332116 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.332305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.332469 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.332642 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.435167 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.435551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.435756 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.435945 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.436104 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.539081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.539404 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.539559 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.539797 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.539937 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.643236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.643335 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.643353 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.643377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.643393 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.746294 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.746345 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.746363 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.746387 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.746403 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.850240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.850310 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.850335 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.850367 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.850393 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.952413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.952468 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.952485 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.952508 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:49 crc kubenswrapper[4872]: I0203 06:01:49.952527 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:49Z","lastTransitionTime":"2026-02-03T06:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.055317 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.055833 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.055991 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.056131 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.056292 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.088013 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:47:26.606677015 +0000 UTC Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.123121 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:50 crc kubenswrapper[4872]: E0203 06:01:50.123212 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.123972 4872 scope.go:117] "RemoveContainer" containerID="47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.144415 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.149605 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.158910 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.158957 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.158974 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.158999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.159017 4872 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.171352 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.194243 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.224651 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.241899 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.259668 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.261326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.261382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.261393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.261413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.261427 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.273527 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.291481 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.315698 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"2026-02-03T06:00:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215\\\\n2026-02-03T06:00:57+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215 to /host/opt/cni/bin/\\\\n2026-02-03T06:00:58Z [verbose] multus-daemon started\\\\n2026-02-03T06:00:58Z [verbose] Readiness Indicator file check\\\\n2026-02-03T06:01:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.353181 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.364018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.364039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.364046 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.364060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.364069 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.367620 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9619f486-cce6-4470-a125-69dcf9a50c97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.380203 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.393539 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.408105 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.422182 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 
2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.433983 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.449843 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.464874 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.466114 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.466150 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 
06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.466161 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.466178 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.466190 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.568926 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.568971 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.568982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.568999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.569011 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.654838 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/2.log" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.656745 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.667012 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9619f486-cce6-4470-a125-69dcf9a50c97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.670427 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.670455 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.670466 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.670482 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.670494 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.678628 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.690898 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.700978 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.713257 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.729466 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"2026-02-03T06:00:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215\\\\n2026-02-03T06:00:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215 to /host/opt/cni/bin/\\\\n2026-02-03T06:00:58Z [verbose] multus-daemon started\\\\n2026-02-03T06:00:58Z [verbose] Readiness Indicator file check\\\\n2026-02-03T06:01:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.746235 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.754963 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.766527 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.772309 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.772340 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 
06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.772349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.772363 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.772372 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.776413 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.786002 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.796946 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.805829 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.820217 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.830623 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.858144 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b126101f-96e6-4ae1-9153-af55d4a86409\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebac1c8b41abb507c1c46c2c41db89d11df7de6a69743e384c5b2f605a223b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e4558a6aad40f4edeecc1ee4cf08b24c8f36a10f1f719c0b751f703e20022c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647fd7b3133db8861de3ba448d648401cc3745d1d43c35809a5b33845077175c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a41df6e59e9659b116572130787f5ac782ba462c7a7335dc09de90fd76e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://361b3561576fcd438576ea10f1626ee87679dd60ad84ca855e148f79dd4a6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.870846 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.874394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.874426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.874435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.874449 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.874460 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.884941 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.901453 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:50Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.977053 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.977149 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:50 crc 
kubenswrapper[4872]: I0203 06:01:50.977168 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.977188 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:50 crc kubenswrapper[4872]: I0203 06:01:50.977202 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:50Z","lastTransitionTime":"2026-02-03T06:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.079789 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.079832 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.079843 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.079860 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.079872 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.089103 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:27:04.283860981 +0000 UTC Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.121963 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.122046 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:51 crc kubenswrapper[4872]: E0203 06:01:51.122135 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.122323 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:51 crc kubenswrapper[4872]: E0203 06:01:51.122379 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:51 crc kubenswrapper[4872]: E0203 06:01:51.122512 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.181960 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.182020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.182038 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.182061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.182080 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.287961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.288016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.288035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.288059 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.288076 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.390559 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.390626 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.390642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.390668 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.390714 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.493579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.493635 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.493651 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.493675 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.493730 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.596394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.596446 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.596464 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.596487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.596505 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.665155 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/3.log" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.666354 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/2.log" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.670511 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" exitCode=1 Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.670568 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.670616 4872 scope.go:117] "RemoveContainer" containerID="47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.671752 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:01:51 crc kubenswrapper[4872]: E0203 06:01:51.671995 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.694161 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"2026-02-03T06:00:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215\\\\n2026-02-03T06:00:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215 to /host/opt/cni/bin/\\\\n2026-02-03T06:00:58Z [verbose] multus-daemon started\\\\n2026-02-03T06:00:58Z [verbose] Readiness Indicator file check\\\\n2026-02-03T06:01:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.699593 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.699632 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.699648 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.699670 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.699728 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.727080 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:50Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 06:01:50.978848 6874 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0203 06:01:50.978895 6874 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0203 06:01:50.978929 6874 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0203 06:01:50.978996 6874 factory.go:1336] Added *v1.Node event handler 7\\\\nI0203 06:01:50.979070 6874 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0203 06:01:50.979381 6874 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 06:01:50.979500 6874 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 06:01:50.979559 6874 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:50.979603 6874 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 06:01:50.979712 6874 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.743610 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9619f486-cce6-4470-a125-69dcf9a50c97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.764601 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.783722 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.802910 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.802961 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.802977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.803000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.803016 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.804343 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.854020 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.871994 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.890552 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.905914 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.905955 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.905968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.905985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.906023 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:51Z","lastTransitionTime":"2026-02-03T06:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.913144 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.927096 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.940934 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.958884 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 
06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.980525 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:51 crc kubenswrapper[4872]: I0203 06:01:51.997337 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:51Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.008650 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.008730 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.008748 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.008774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.008794 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.030449 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b126101f-96e6-4ae1-9153-af55d4a86409\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebac1c8b41abb507c1c46c2c41db89d11df7de6a69743e384c5b2f605a223b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e4558a6aad40f4edeecc1ee4cf08b24c8f36a10f1f719c0b751f703e20022c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647fd7b3133db8861de3ba448d648401cc3745d1d43c35809a5b33845077175c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a41df6e59e9659b116572130787f5ac782ba462c7a7335dc09de90fd76e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://361b3561576fcd438576ea10f1626ee87679dd60ad84ca855e148f79dd4a6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:52Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.049965 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:52Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.070378 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:52Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.089327 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:28:08.661717713 +0000 UTC Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.092951 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:52Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.112339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.112416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc 
kubenswrapper[4872]: I0203 06:01:52.112441 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.112475 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.112500 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.122836 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:52 crc kubenswrapper[4872]: E0203 06:01:52.123068 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.215272 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.215322 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.215339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.215365 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.215383 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.318502 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.318554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.318571 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.318593 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.318610 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.421187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.421247 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.421270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.421297 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.421321 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.528567 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.528654 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.528675 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.528731 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.528760 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.632853 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.632948 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.632977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.633014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.633041 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.678495 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/3.log" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.735595 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.735659 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.735676 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.735732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.735750 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.838642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.838751 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.838773 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.838797 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.838814 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.942008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.942077 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.942097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.942122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:52 crc kubenswrapper[4872]: I0203 06:01:52.942137 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:52Z","lastTransitionTime":"2026-02-03T06:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.044597 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.044679 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.044809 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.044835 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.044852 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.090051 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:57:57.991696735 +0000 UTC Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.122567 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.122765 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.123035 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.123126 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.123325 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.123419 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.146817 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.146863 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.146881 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.146904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.146920 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.250100 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.250128 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.250139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.250156 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.250168 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.268773 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.268952 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.268928695 +0000 UTC m=+147.851620119 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.352675 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.352778 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.352800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.352832 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.352855 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.369901 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.369973 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.370033 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.370071 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370153 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370193 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370212 4872 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370219 4872 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370234 4872 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370293 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.370268313 +0000 UTC m=+147.952959787 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370348 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.370309194 +0000 UTC m=+147.953000658 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370386 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.370366796 +0000 UTC m=+147.953058330 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370473 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370507 4872 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370567 4872 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:53 crc kubenswrapper[4872]: E0203 06:01:53.370754 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.370662423 +0000 UTC m=+147.953353877 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.456061 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.456102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.456117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.456139 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.456157 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.559265 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.559393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.559463 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.559492 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.559551 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.662291 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.662751 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.662774 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.662846 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.662871 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.766305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.766396 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.766419 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.766475 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.766494 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.869642 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.869754 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.869783 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.869813 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.869831 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.973163 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.973259 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.973276 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.973299 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:53 crc kubenswrapper[4872]: I0203 06:01:53.973349 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:53Z","lastTransitionTime":"2026-02-03T06:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.077012 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.077080 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.077104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.077133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.077155 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.090731 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:41:29.900502193 +0000 UTC Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.122517 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:54 crc kubenswrapper[4872]: E0203 06:01:54.122661 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.179800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.180031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.180050 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.180071 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.180090 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.283196 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.283239 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.283255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.283277 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.283292 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.386039 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.386112 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.386133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.386157 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.386175 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.489193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.489263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.489285 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.489382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.489405 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.592892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.593224 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.593363 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.593506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.593732 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.696733 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.696794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.696806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.696823 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.696858 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.800788 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.800856 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.800880 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.800911 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.800933 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.904134 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.904196 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.904214 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.904240 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:54 crc kubenswrapper[4872]: I0203 06:01:54.904257 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:54Z","lastTransitionTime":"2026-02-03T06:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.006438 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.006472 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.006483 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.006499 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.006510 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.091215 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:33:04.007357836 +0000 UTC Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.108940 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.108977 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.108988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.109003 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.109014 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.122574 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.122620 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.122610 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:55 crc kubenswrapper[4872]: E0203 06:01:55.122776 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:55 crc kubenswrapper[4872]: E0203 06:01:55.122832 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:55 crc kubenswrapper[4872]: E0203 06:01:55.122922 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.212928 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.212978 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.212987 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.212999 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.213007 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.315985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.316048 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.316067 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.316091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.316110 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.419241 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.419318 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.419339 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.419361 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.419380 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.521667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.521794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.521820 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.521852 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.521875 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.625140 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.625179 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.625187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.625205 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.625214 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.728175 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.728217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.728233 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.728253 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.728270 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.834737 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.834981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.835049 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.835115 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.835174 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.938228 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.938283 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.938300 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.938323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:55 crc kubenswrapper[4872]: I0203 06:01:55.938342 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:55Z","lastTransitionTime":"2026-02-03T06:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.041272 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.041333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.041350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.041373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.041391 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.091626 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:52:39.099274578 +0000 UTC Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.122825 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:01:56 crc kubenswrapper[4872]: E0203 06:01:56.123025 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.144879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.144934 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.144950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.144972 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.144990 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.247382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.247435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.247451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.247474 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.247492 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.350199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.350258 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.350268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.350280 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.350288 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.452424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.452500 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.452524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.452551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.452573 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.555518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.555655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.555714 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.555740 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.555762 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.659010 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.659064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.659081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.659105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.659121 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.762546 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.762596 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.762613 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.762643 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.762665 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.866176 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.866250 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.866270 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.866295 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.866312 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.969902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.969976 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.969993 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.970020 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:56 crc kubenswrapper[4872]: I0203 06:01:56.970038 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:56Z","lastTransitionTime":"2026-02-03T06:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.072896 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.072953 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.072969 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.072989 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.073003 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.092755 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:46:43.67813777 +0000 UTC Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.122448 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.122470 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:57 crc kubenswrapper[4872]: E0203 06:01:57.122670 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.122784 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:57 crc kubenswrapper[4872]: E0203 06:01:57.122828 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:57 crc kubenswrapper[4872]: E0203 06:01:57.122923 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.176236 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.176296 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.176320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.176349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.176424 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.279065 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.279106 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.279122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.279136 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.279146 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.382217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.382274 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.382292 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.382314 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.382332 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.484220 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.484281 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.484302 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.484330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.484350 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.588135 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.588208 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.588231 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.588261 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.588284 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.691182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.691239 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.691255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.691279 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.691298 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.800307 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.800389 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.800416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.800447 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.800478 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.903939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.904281 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.904422 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.904564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:57 crc kubenswrapper[4872]: I0203 06:01:57.904735 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:57Z","lastTransitionTime":"2026-02-03T06:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.007457 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.007531 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.007548 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.007568 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.007585 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.029122 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.029176 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.029193 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.029217 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.029238 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:58 crc kubenswrapper[4872]: E0203 06:01:58.050770 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:58Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.056066 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.056118 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.056130 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.056148 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.056160 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:58 crc kubenswrapper[4872]: E0203 06:01:58.075364 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:58Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.079345 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.079372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.079382 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.079400 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.079412 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.093859 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:50:38.210291803 +0000 UTC
Feb 03 06:01:58 crc kubenswrapper[4872]: E0203 06:01:58.099145 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:58Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.103387 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.103423 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.103435 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.103451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.103463 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:58 crc kubenswrapper[4872]: E0203 06:01:58.123090 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:58Z is after 2025-08-24T17:21:41Z"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.123565 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn"
Feb 03 06:01:58 crc kubenswrapper[4872]: E0203 06:01:58.123749 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.127933 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.127981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.128367 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.128716 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.128773 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 06:01:58 crc kubenswrapper[4872]: E0203 06:01:58.149096 4872 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4d942662-8847-4f84-a334-73ce9180bb14\\\",\\\"systemUUID\\\":\\\"e5eae85c-6acb-4fe9-bf1d-0d7dc555ae54\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:01:58Z is after 2025-08-24T17:21:41Z" Feb 03 06:01:58 crc kubenswrapper[4872]: E0203 06:01:58.149322 4872 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.151311 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.151372 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.151400 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.151432 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.151452 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.254458 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.254532 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.254557 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.254588 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.254611 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.358283 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.358349 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.358366 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.358392 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.358411 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.461953 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.462364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.462517 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.462672 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.462868 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.565428 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.565487 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.565498 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.565517 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.565528 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.668570 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.668610 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.668618 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.668634 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.668644 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.772000 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.772044 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.772060 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.772081 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.772099 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.875219 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.875369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.875398 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.875424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.875446 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.978551 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.978623 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.978647 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.978677 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:58 crc kubenswrapper[4872]: I0203 06:01:58.978740 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:58Z","lastTransitionTime":"2026-02-03T06:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.081332 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.081380 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.081399 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.081424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.081441 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.094396 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:36:02.465634919 +0000 UTC Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.121784 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:01:59 crc kubenswrapper[4872]: E0203 06:01:59.122002 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.122051 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.122123 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:01:59 crc kubenswrapper[4872]: E0203 06:01:59.122222 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:01:59 crc kubenswrapper[4872]: E0203 06:01:59.122356 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.184726 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.184798 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.184829 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.184861 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.184884 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.288416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.288477 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.288494 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.288519 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.288536 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.391304 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.391348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.391364 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.391387 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.391403 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.495556 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.495614 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.495633 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.495669 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.495717 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.605255 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.605306 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.605323 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.605347 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.605365 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.707093 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.707129 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.707168 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.707187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.707198 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.810334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.810377 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.810394 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.810415 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.810431 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.913108 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.913506 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.913523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.913546 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:01:59 crc kubenswrapper[4872]: I0203 06:01:59.913563 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:01:59Z","lastTransitionTime":"2026-02-03T06:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.017097 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.017159 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.017177 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.017202 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.017219 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.094755 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:10:15.379674459 +0000 UTC Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.120397 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.120500 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.120521 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.120547 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.120570 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.121842 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:00 crc kubenswrapper[4872]: E0203 06:02:00.122080 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.170791 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.190595 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 
06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.211742 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.223071 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.223094 4872 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.223102 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.223113 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.223121 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.223488 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.239509 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.249467 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.259828 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.271339 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.288425 4872 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.306070 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b126101f-96e6-4ae1-9153-af55d4a86409\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebac1c8b41abb507c1c46c2c41db89d11df7de6a69743e384c5b2f605a223b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e4558a6aad40f4edeecc1ee4cf08b24c8f36a10f1f719c0b751f703e20022c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647fd7b3133db8861de3ba448d648401cc3745d1d43c35809a5b33845077175c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a41df6e59e9659b116572130787f5ac782ba46
2c7a7335dc09de90fd76e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://361b3561576fcd438576ea10f1626ee87679dd60ad84ca855e148f79dd4a6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.321048 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.326169 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.326279 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.326298 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.326320 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.326337 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.338469 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.355866 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.370415 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.390209 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"2026-02-03T06:00:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215\\\\n2026-02-03T06:00:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215 to /host/opt/cni/bin/\\\\n2026-02-03T06:00:58Z [verbose] multus-daemon started\\\\n2026-02-03T06:00:58Z [verbose] Readiness Indicator file check\\\\n2026-02-03T06:01:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.417647 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47a38f39c6b4f85ca46d5399b9be42e16a1b8b6c90505c547954b0f25a5e7348\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:26Z\\\",\\\"message\\\":\\\"cer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}\\\\nI0203 06:01:26.050414 6474 services_controller.go:360] Finished syncing service dns-default on namespace openshift-dns for network=default : 3.308699ms\\\\nI0203 06:01:26.050428 6474 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator-webhook for network=default\\\\nI0203 06:01:26.050396 6474 services_controller.go:452] Built service openshift-kube-scheduler-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0203 06:01:26.049304 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-drpfn\\\\nF0203 06:01:26.050187 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:50Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 06:01:50.978848 6874 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0203 06:01:50.978895 6874 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0203 06:01:50.978929 6874 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0203 
06:01:50.978996 6874 factory.go:1336] Added *v1.Node event handler 7\\\\nI0203 06:01:50.979070 6874 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0203 06:01:50.979381 6874 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 06:01:50.979500 6874 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 06:01:50.979559 6874 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:50.979603 6874 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 06:01:50.979712 6874 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.428767 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.428824 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.428842 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.428868 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.428886 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.432237 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9619f486-cce6-4470-a125-69dcf9a50c97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.447984 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.466809 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:00Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.531902 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.531952 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.531965 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.531982 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.531997 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.634893 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.634968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.634988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.635018 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.635041 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.738624 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.738715 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.738732 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.738757 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.738776 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.841932 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.841994 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.842009 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.842034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.842051 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.944434 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.944674 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.944757 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.944840 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:00 crc kubenswrapper[4872]: I0203 06:02:00.944925 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:00Z","lastTransitionTime":"2026-02-03T06:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.047923 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.048162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.048234 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.048309 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.048381 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.094890 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:03:54.89396344 +0000 UTC Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.122308 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.122441 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.122358 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:01 crc kubenswrapper[4872]: E0203 06:02:01.122706 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:01 crc kubenswrapper[4872]: E0203 06:02:01.122882 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:01 crc kubenswrapper[4872]: E0203 06:02:01.122979 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.150749 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.150787 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.150799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.150814 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.150826 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.253268 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.253308 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.253319 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.253334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.253347 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.356675 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.356779 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.356800 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.356828 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.356852 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.459554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.459584 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.459593 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.459607 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.459619 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.562034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.562091 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.562110 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.562133 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.562150 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.664803 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.664862 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.664879 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.664903 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.664922 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.767821 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.767898 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.767917 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.767941 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.767961 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.871420 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.871497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.871520 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.871554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.871578 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.974606 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.974660 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.974678 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.974764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:01 crc kubenswrapper[4872]: I0203 06:02:01.974788 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:01Z","lastTransitionTime":"2026-02-03T06:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.076912 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.077028 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.077047 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.077072 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.077090 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.096360 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:57:25.519302849 +0000 UTC Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.121864 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:02 crc kubenswrapper[4872]: E0203 06:02:02.122230 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.179540 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.179578 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.179589 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.179603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.179614 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.282950 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.282988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.282998 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.283014 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.283022 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.385677 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.385744 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.385753 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.385770 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.385779 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.488224 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.488260 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.488271 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.488286 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.488297 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.591777 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.592117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.592321 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.592523 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.592747 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.695838 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.695877 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.695887 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.695901 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.695913 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.799237 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.799307 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.799326 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.799351 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.799369 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.902430 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.902502 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.902524 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.902553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:02 crc kubenswrapper[4872]: I0203 06:02:02.902574 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:02Z","lastTransitionTime":"2026-02-03T06:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.006105 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.006162 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.006182 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.006209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.006228 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.096814 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:52:23.360933312 +0000 UTC Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.109087 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.109143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.109160 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.109187 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.109208 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.121577 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.121713 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:03 crc kubenswrapper[4872]: E0203 06:02:03.121773 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:03 crc kubenswrapper[4872]: E0203 06:02:03.121875 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.121978 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:03 crc kubenswrapper[4872]: E0203 06:02:03.122176 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.211974 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.212034 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.212055 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.212084 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.212104 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.315173 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.315239 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.315262 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.315296 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.315318 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.418146 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.418180 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.418209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.418227 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.418240 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.520525 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.520564 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.520574 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.520593 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.520605 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.622987 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.623047 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.623063 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.623089 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.623107 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.725595 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.725639 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.725655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.725677 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.725734 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.828808 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.828864 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.828881 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.828904 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.828920 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.931640 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.931712 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.931724 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.931742 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:03 crc kubenswrapper[4872]: I0203 06:02:03.931753 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:03Z","lastTransitionTime":"2026-02-03T06:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.033985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.034030 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.034043 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.034058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.034070 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.096932 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:13:52.373231838 +0000 UTC Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.122270 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:04 crc kubenswrapper[4872]: E0203 06:02:04.122465 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.136451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.136483 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.136492 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.136503 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.136512 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.239154 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.239199 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.239209 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.239242 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.239253 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.342452 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.342495 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.342507 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.342526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.342537 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.445973 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.446036 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.446107 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.446138 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.446225 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.549126 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.549189 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.549212 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.549242 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.549263 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.652221 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.652290 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.652317 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.652348 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.652369 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.754794 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.754869 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.754939 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.754968 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.755022 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.857393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.857436 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.857451 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.857472 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.857489 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.960533 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.960592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.960609 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.960632 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:04 crc kubenswrapper[4872]: I0203 06:02:04.960649 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:04Z","lastTransitionTime":"2026-02-03T06:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.062579 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.062608 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.062615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.062627 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.062636 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.098092 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:58:18.56105264 +0000 UTC Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.122671 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.122675 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:05 crc kubenswrapper[4872]: E0203 06:02:05.122844 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.122909 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:05 crc kubenswrapper[4872]: E0203 06:02:05.123008 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:05 crc kubenswrapper[4872]: E0203 06:02:05.123150 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.124850 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:02:05 crc kubenswrapper[4872]: E0203 06:02:05.125185 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.140456 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c3fc0d-4542-4ed2-bc2d-49dc7130133a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ec7c8925a0c950cbf4c884e0c40f7b148a709cd84427ae52d13355d51c0b6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bea67e5f84be5bf3afa9ad79af965286d4eb570ad7db652e6b04bf7b5cb05e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e291263368a8127768144457fe182acf94e4f2731e55511fdeb0f2c7d2d73b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://606bc912a2c7a99b74ee5b45941be79dc1cc67d5673a65dec9e095c5c5170568\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.162845 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6759dc-5609-4243-988a-125ffde7ec9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0203 06:00:43.644934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 06:00:43.647523 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929021418/tls.crt::/tmp/serving-cert-3929021418/tls.key\\\\\\\"\\\\nI0203 06:00:49.275246 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0203 06:00:49.279569 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0203 06:00:49.279602 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0203 06:00:49.279649 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0203 06:00:49.279659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0203 06:00:49.293842 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0203 06:00:49.293882 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293891 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0203 06:00:49.293900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0203 06:00:49.293907 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0203 06:00:49.293913 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0203 06:00:49.293919 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0203 06:00:49.294315 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0203 06:00:49.298004 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.165206 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.165247 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 
06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.165263 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.165288 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.165305 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.177069 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-898rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40f2e41-c3e3-4cfe-bf45-9d90649366d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e88a210fb2a5e3a4f87c93c39b4ea1be45efc7a607d30d149efa201afc15ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-898rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.192312 4872 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pwxt9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e5db98db-c011-40be-b541-4a6552618133\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a233fde22388af3cfa69e041cae9563b1bead789211aeb1ba4ccba145d7c7f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tkh9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pwxt9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.208294 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a830583-53f4-48c2-9120-c57c2c4b81e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b6d57189945829f486095f43ab1fbda7a9192efe7cbdc51c11d26ee2ea3ce093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cde3d5d0a702fd4309c1af58d1c122e7fb7f5f64f710906aff3a1934a2e9a0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxgmc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nrgq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 
06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.222226 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drpfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a14ad474-acae-486b-bac9-e5e20cc8ec2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gxdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:01:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drpfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.238786 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8828f80b97229db60a843be4b27e6c5ef3af13441ce6fd736164df7360caa1ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.251980 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05d05db4-da7f-4f2f-9025-672aefab2d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca339b85ddb5d5dfd7fa9549d7b07457f14154fb75bd242f9ff15452cf003271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7w5qr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hgxbn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.268288 4872 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.268334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.268350 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.268373 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.268390 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.272735 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b126101f-96e6-4ae1-9153-af55d4a86409\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ebac1c8b41abb507c1c46c2c41db89d11df7de6a69743e384c5b2f605a223b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e4558a6aad40f4edeecc1ee4cf08b24c8f36a10f1f719c0b751f703e20022c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647fd7b3133db8861de3ba448d648401cc3745d1d43c35809a5b33845077175c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a41df6e59e9659b116572130787f5ac782ba462c7a7335dc09de90fd76e7348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://361b3561576fcd438576ea10f1626ee87679dd60ad84ca855e148f79dd4a6a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d02ca20625d15570f29399dafe3b5803ab150ac0a04d233b420f64e4a8b356a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05736a0ed74fefa0a541dad3ca14e6ecd99728bf6a82b8a91e25d7c23b6d4253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69ff18768a85a06126e97ca6e16493b70d12c43de334acf77f9a41cc2071694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.290215 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.306424 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a8ed40524be5f119287e5f1692f899efa57cb7df8542a52d03f628e2f73b8c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://328e3b44f616996da2860b28fe09f7e9d62c55ed4bbf03f1e20da1b3815fe100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.323880 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e75c1ed-cb56-4fe0-a8db-fd1400cb935f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab72515029b32aa938f106d08416e35aa257deeffab9bce66225c5a506e98a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57e220be0d66592ba934393eb33f776616b5509aaacc19c62d7b5fd828921bc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15ed80a2e6ce37b02f83383278c5766d3e9c7f62b97b74ebfc287526f3299b47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f38db215507ab4c1fc344000bc38f3619eec887b559a9b2721e61f1e071ef30a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ba28821e32399fe0be75465b0ecdb0a21350b674d8d2430d830ea18a4641f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4d1030389fec56ff53744b69fa8647963d66172b47c653adaf33b979a22088\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6e7e1057d4a2d2d7c64746c1ef3c038c3945d13e7be5dbffe6664c0f6a0b8b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:01:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6w97s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.336961 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9619f486-cce6-4470-a125-69dcf9a50c97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972e64032788cda30595a6d5f4aab1d46317c29f198530b1c1d5aebb20fe2d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff450568e6d723595b97a4fe810d1cacf923d7ced45e8555133e1fc2fee6b68d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.353442 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e54dcdfc-342f-4d99-966c-c05430165c77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32bc3109d0a693e624cac5b1372fa7f7546bf44028d0325cfff7ec8d6cfe9729\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7ee5b640e87af99fe92674899e4d4ffd82097ec2195499d60bce027e429a0fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d4f6947657d000e37a3e38a2a4daeaabf412bf9bcdae7f949bb22944eaf0b85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.369761 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.371569 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.371603 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.371615 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.371630 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.371641 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.385761 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c3bfa1c035092d3b8cb3d8cfec0d428fdb5b3243afe33dedeb3d620da636c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.400987 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.420574 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2f65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db59aed5-04bc-4793-8938-196aace29feb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:43Z\\\",\\\"message\\\":\\\"2026-02-03T06:00:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215\\\\n2026-02-03T06:00:57+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_0c31c132-9bc1-4e17-b55d-979207c35215 to /host/opt/cni/bin/\\\\n2026-02-03T06:00:58Z [verbose] multus-daemon started\\\\n2026-02-03T06:00:58Z [verbose] Readiness Indicator file check\\\\n2026-02-03T06:01:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2nd6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2f65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.451993 4872 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dafd73bb-7642-409c-9ea2-f6dbc002067f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:00:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T06:01:50Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0203 06:01:50.978848 6874 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0203 06:01:50.978895 6874 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0203 06:01:50.978929 6874 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0203 06:01:50.978996 6874 factory.go:1336] Added *v1.Node event handler 7\\\\nI0203 06:01:50.979070 6874 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0203 06:01:50.979381 6874 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0203 06:01:50.979500 6874 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0203 06:01:50.979559 6874 ovnkube.go:599] Stopped ovnkube\\\\nI0203 06:01:50.979603 6874 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 06:01:50.979712 6874 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T06:01:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T06:01:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T06:00:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T06:00:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9hh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T06:00:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jbpgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T06:02:05Z is after 2025-08-24T17:21:41Z" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.474026 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.474078 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.474095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.474117 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.474134 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.576554 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.576594 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.576605 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.576623 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.576636 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.680512 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.680557 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.680571 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.680592 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.680610 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.783336 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.783397 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.783416 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.783443 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.783461 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.886799 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.886853 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.886869 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.886892 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.886908 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.990143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.990203 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.990222 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.990246 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:05 crc kubenswrapper[4872]: I0203 06:02:05.990262 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:05Z","lastTransitionTime":"2026-02-03T06:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.092852 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.092913 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.092929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.092951 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.092968 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.098890 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:45:17.551794737 +0000 UTC Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.122477 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:06 crc kubenswrapper[4872]: E0203 06:02:06.122747 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.195673 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.195764 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.195783 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.195806 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.195822 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.297849 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.297938 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.297959 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.297981 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.298033 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.400801 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.400854 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.400871 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.400894 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.400910 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.503949 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.504013 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.504038 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.504069 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.504091 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.607079 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.607143 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.607166 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.607194 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.607217 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.711305 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.711369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.711395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.711426 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.711450 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.814791 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.814861 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.814883 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.814912 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.814933 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.917064 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.917289 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.917395 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.917483 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:06 crc kubenswrapper[4872]: I0203 06:02:06.917574 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:06Z","lastTransitionTime":"2026-02-03T06:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.019771 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.019985 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.020109 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.020226 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.020312 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.099388 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:35:39.293671493 +0000 UTC Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.122198 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.122263 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.122374 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:07 crc kubenswrapper[4872]: E0203 06:02:07.122567 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.122997 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: E0203 06:02:07.123014 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.123035 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.123074 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.123095 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.123115 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: E0203 06:02:07.123172 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.226988 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.227058 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.227075 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.227104 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.227121 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.330480 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.330526 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.330536 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.330553 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.330566 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.432973 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.433016 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.433031 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.433053 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.433072 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.535619 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.535667 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.535712 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.535737 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.535755 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.639437 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.639480 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.639497 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.639518 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.639536 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.742301 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.742369 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.742393 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.742421 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.742442 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.845371 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.845413 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.845424 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.845441 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.845452 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.947398 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.947577 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.947678 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.947798 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:07 crc kubenswrapper[4872]: I0203 06:02:07.947876 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:07Z","lastTransitionTime":"2026-02-03T06:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.050653 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.051008 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.051147 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.051277 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.051400 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:08Z","lastTransitionTime":"2026-02-03T06:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.100379 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:43:29.914403786 +0000 UTC Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.121781 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:08 crc kubenswrapper[4872]: E0203 06:02:08.122110 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.157250 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.157312 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.157333 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.157448 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.157523 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:08Z","lastTransitionTime":"2026-02-03T06:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.261220 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.261334 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.261352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.261375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.261392 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:08Z","lastTransitionTime":"2026-02-03T06:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.368247 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.368330 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.368353 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.368384 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.368416 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:08Z","lastTransitionTime":"2026-02-03T06:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.471605 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.471655 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.471671 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.471718 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.471736 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:08Z","lastTransitionTime":"2026-02-03T06:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.533280 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.533335 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.533352 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.533375 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.533393 4872 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T06:02:08Z","lastTransitionTime":"2026-02-03T06:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.612440 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj"] Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.613374 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.615523 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.615619 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.615750 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.617723 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.632764 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.632745154 podStartE2EDuration="51.632745154s" podCreationTimestamp="2026-02-03 06:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.631346171 +0000 UTC m=+99.214037615" watchObservedRunningTime="2026-02-03 06:02:08.632745154 +0000 UTC m=+99.215436588" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.649234 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.649210069 podStartE2EDuration="1m18.649210069s" podCreationTimestamp="2026-02-03 06:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.649210739 +0000 UTC m=+99.231902173" watchObservedRunningTime="2026-02-03 06:02:08.649210069 +0000 UTC m=+99.231901493" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.672377 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pwxt9" podStartSLOduration=73.672352503 podStartE2EDuration="1m13.672352503s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.672261791 +0000 UTC m=+99.254953235" watchObservedRunningTime="2026-02-03 06:02:08.672352503 +0000 UTC m=+99.255043927" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.672616 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-898rp" podStartSLOduration=73.6726099 podStartE2EDuration="1m13.6726099s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.660280154 +0000 UTC m=+99.242971608" watchObservedRunningTime="2026-02-03 06:02:08.6726099 +0000 UTC m=+99.255301324" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.687628 4872 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nrgq4" podStartSLOduration=73.687609798 podStartE2EDuration="1m13.687609798s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.687118407 +0000 UTC m=+99.269809841" watchObservedRunningTime="2026-02-03 06:02:08.687609798 +0000 UTC m=+99.270301222" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.744547 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa3d79d-dcb5-400a-861f-1e58783e9a54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.744590 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aa3d79d-dcb5-400a-861f-1e58783e9a54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.744606 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aa3d79d-dcb5-400a-861f-1e58783e9a54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.744621 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa3d79d-dcb5-400a-861f-1e58783e9a54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.744642 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aa3d79d-dcb5-400a-861f-1e58783e9a54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.762205 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podStartSLOduration=73.762188606 podStartE2EDuration="1m13.762188606s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.739250726 +0000 UTC m=+99.321942140" watchObservedRunningTime="2026-02-03 06:02:08.762188606 +0000 UTC m=+99.344880030" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.762398 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" 
podStartSLOduration=18.762394591 podStartE2EDuration="18.762394591s" podCreationTimestamp="2026-02-03 06:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.761728124 +0000 UTC m=+99.344419538" watchObservedRunningTime="2026-02-03 06:02:08.762394591 +0000 UTC m=+99.345086005" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.821291 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9pfgq" podStartSLOduration=73.821274271 podStartE2EDuration="1m13.821274271s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.808453674 +0000 UTC m=+99.391145088" watchObservedRunningTime="2026-02-03 06:02:08.821274271 +0000 UTC m=+99.403965685" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.845239 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa3d79d-dcb5-400a-861f-1e58783e9a54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.845517 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aa3d79d-dcb5-400a-861f-1e58783e9a54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.845668 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aa3d79d-dcb5-400a-861f-1e58783e9a54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.845784 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4aa3d79d-dcb5-400a-861f-1e58783e9a54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.845709 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4aa3d79d-dcb5-400a-861f-1e58783e9a54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.845810 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa3d79d-dcb5-400a-861f-1e58783e9a54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" 
Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.846131 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aa3d79d-dcb5-400a-861f-1e58783e9a54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.847070 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa3d79d-dcb5-400a-861f-1e58783e9a54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.860589 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa3d79d-dcb5-400a-861f-1e58783e9a54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.866105 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.866084715 podStartE2EDuration="27.866084715s" podCreationTimestamp="2026-02-03 06:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.865677994 +0000 UTC m=+99.448369408" watchObservedRunningTime="2026-02-03 06:02:08.866084715 +0000 UTC m=+99.448776129" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.867414 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aa3d79d-dcb5-400a-861f-1e58783e9a54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dbwpj\" (UID: \"4aa3d79d-dcb5-400a-861f-1e58783e9a54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.903184 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.903163313 podStartE2EDuration="1m13.903163313s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.882740084 +0000 UTC m=+99.465431498" watchObservedRunningTime="2026-02-03 06:02:08.903163313 +0000 UTC m=+99.485854737" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.931413 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" Feb 03 06:02:08 crc kubenswrapper[4872]: I0203 06:02:08.946065 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g2f65" podStartSLOduration=73.946047641 podStartE2EDuration="1m13.946047641s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:08.943138321 +0000 UTC m=+99.525829745" watchObservedRunningTime="2026-02-03 06:02:08.946047641 +0000 UTC m=+99.528739065" Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.101857 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:47:20.856548014 +0000 UTC Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.101938 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.112651 4872 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.122810 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.122809 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.122892 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:09 crc kubenswrapper[4872]: E0203 06:02:09.123230 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:09 crc kubenswrapper[4872]: E0203 06:02:09.123354 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:09 crc kubenswrapper[4872]: E0203 06:02:09.123517 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.742076 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" event={"ID":"4aa3d79d-dcb5-400a-861f-1e58783e9a54","Type":"ContainerStarted","Data":"b9bacc9cf1bf4facdbc3ca9759d55e814f6f12957d6023457b0486836e278aa8"} Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.742560 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" event={"ID":"4aa3d79d-dcb5-400a-861f-1e58783e9a54","Type":"ContainerStarted","Data":"c64e7877e5d15bbcbbad66c089de7b3f248bb7c4368a8087c6d5d7be2834d4b8"} Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.762951 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dbwpj" podStartSLOduration=74.76290335 podStartE2EDuration="1m14.76290335s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:09.76288075 +0000 UTC m=+100.345572234" watchObservedRunningTime="2026-02-03 06:02:09.76290335 +0000 UTC m=+100.345594824" Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.835917 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:02:09 crc kubenswrapper[4872]: I0203 06:02:09.837717 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:02:09 crc kubenswrapper[4872]: E0203 06:02:09.838131 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:02:10 crc kubenswrapper[4872]: I0203 06:02:10.122398 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:10 crc kubenswrapper[4872]: E0203 06:02:10.124313 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:11 crc kubenswrapper[4872]: I0203 06:02:11.122156 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:11 crc kubenswrapper[4872]: E0203 06:02:11.122328 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:11 crc kubenswrapper[4872]: I0203 06:02:11.123309 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:11 crc kubenswrapper[4872]: E0203 06:02:11.123427 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:11 crc kubenswrapper[4872]: I0203 06:02:11.123497 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:11 crc kubenswrapper[4872]: E0203 06:02:11.123575 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:12 crc kubenswrapper[4872]: I0203 06:02:12.121863 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:12 crc kubenswrapper[4872]: E0203 06:02:12.122123 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:13 crc kubenswrapper[4872]: I0203 06:02:13.121813 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:13 crc kubenswrapper[4872]: I0203 06:02:13.121845 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:13 crc kubenswrapper[4872]: I0203 06:02:13.121840 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:13 crc kubenswrapper[4872]: E0203 06:02:13.122121 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:13 crc kubenswrapper[4872]: E0203 06:02:13.122307 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:13 crc kubenswrapper[4872]: E0203 06:02:13.122556 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:14 crc kubenswrapper[4872]: I0203 06:02:14.122198 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:14 crc kubenswrapper[4872]: E0203 06:02:14.122357 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:14 crc kubenswrapper[4872]: I0203 06:02:14.205785 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:14 crc kubenswrapper[4872]: E0203 06:02:14.206007 4872 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:02:14 crc kubenswrapper[4872]: E0203 06:02:14.206112 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs podName:a14ad474-acae-486b-bac9-e5e20cc8ec2e nodeName:}" failed. No retries permitted until 2026-02-03 06:03:18.206090002 +0000 UTC m=+168.788781456 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs") pod "network-metrics-daemon-drpfn" (UID: "a14ad474-acae-486b-bac9-e5e20cc8ec2e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 06:02:15 crc kubenswrapper[4872]: I0203 06:02:15.122329 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:15 crc kubenswrapper[4872]: I0203 06:02:15.122330 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:15 crc kubenswrapper[4872]: I0203 06:02:15.123433 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:15 crc kubenswrapper[4872]: E0203 06:02:15.123678 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:15 crc kubenswrapper[4872]: E0203 06:02:15.124098 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:15 crc kubenswrapper[4872]: E0203 06:02:15.124373 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:16 crc kubenswrapper[4872]: I0203 06:02:16.122762 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:16 crc kubenswrapper[4872]: E0203 06:02:16.122943 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:17 crc kubenswrapper[4872]: I0203 06:02:17.122682 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:17 crc kubenswrapper[4872]: I0203 06:02:17.122808 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:17 crc kubenswrapper[4872]: E0203 06:02:17.122906 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:17 crc kubenswrapper[4872]: I0203 06:02:17.123002 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:17 crc kubenswrapper[4872]: E0203 06:02:17.123133 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:17 crc kubenswrapper[4872]: E0203 06:02:17.123235 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:18 crc kubenswrapper[4872]: I0203 06:02:18.122462 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:18 crc kubenswrapper[4872]: E0203 06:02:18.122655 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:19 crc kubenswrapper[4872]: I0203 06:02:19.121809 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:19 crc kubenswrapper[4872]: I0203 06:02:19.121876 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:19 crc kubenswrapper[4872]: I0203 06:02:19.121818 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:19 crc kubenswrapper[4872]: E0203 06:02:19.121969 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:19 crc kubenswrapper[4872]: E0203 06:02:19.122131 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:19 crc kubenswrapper[4872]: E0203 06:02:19.122207 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:20 crc kubenswrapper[4872]: I0203 06:02:20.122576 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:20 crc kubenswrapper[4872]: E0203 06:02:20.124967 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:21 crc kubenswrapper[4872]: I0203 06:02:21.122357 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:21 crc kubenswrapper[4872]: I0203 06:02:21.122405 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:21 crc kubenswrapper[4872]: I0203 06:02:21.122492 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:21 crc kubenswrapper[4872]: E0203 06:02:21.122644 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:21 crc kubenswrapper[4872]: E0203 06:02:21.122755 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:21 crc kubenswrapper[4872]: E0203 06:02:21.122827 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:22 crc kubenswrapper[4872]: I0203 06:02:22.122350 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:22 crc kubenswrapper[4872]: E0203 06:02:22.123032 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:22 crc kubenswrapper[4872]: I0203 06:02:22.123493 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:02:22 crc kubenswrapper[4872]: E0203 06:02:22.123909 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jbpgt_openshift-ovn-kubernetes(dafd73bb-7642-409c-9ea2-f6dbc002067f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" Feb 03 06:02:23 crc kubenswrapper[4872]: I0203 06:02:23.122403 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:23 crc kubenswrapper[4872]: E0203 06:02:23.122682 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:23 crc kubenswrapper[4872]: I0203 06:02:23.122958 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:23 crc kubenswrapper[4872]: I0203 06:02:23.123030 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:23 crc kubenswrapper[4872]: E0203 06:02:23.123155 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:23 crc kubenswrapper[4872]: E0203 06:02:23.123363 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:24 crc kubenswrapper[4872]: I0203 06:02:24.122651 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:24 crc kubenswrapper[4872]: E0203 06:02:24.122945 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:25 crc kubenswrapper[4872]: I0203 06:02:25.122347 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:25 crc kubenswrapper[4872]: I0203 06:02:25.122466 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:25 crc kubenswrapper[4872]: E0203 06:02:25.122501 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:25 crc kubenswrapper[4872]: I0203 06:02:25.122347 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:25 crc kubenswrapper[4872]: E0203 06:02:25.122605 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:25 crc kubenswrapper[4872]: E0203 06:02:25.122737 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:26 crc kubenswrapper[4872]: I0203 06:02:26.122133 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:26 crc kubenswrapper[4872]: E0203 06:02:26.122369 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:27 crc kubenswrapper[4872]: I0203 06:02:27.122428 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:27 crc kubenswrapper[4872]: E0203 06:02:27.122580 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:27 crc kubenswrapper[4872]: I0203 06:02:27.122870 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:27 crc kubenswrapper[4872]: I0203 06:02:27.122884 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:27 crc kubenswrapper[4872]: E0203 06:02:27.122964 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:27 crc kubenswrapper[4872]: E0203 06:02:27.123069 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:28 crc kubenswrapper[4872]: I0203 06:02:28.122099 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:28 crc kubenswrapper[4872]: E0203 06:02:28.122469 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.121992 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.122069 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.122125 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:29 crc kubenswrapper[4872]: E0203 06:02:29.122287 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:29 crc kubenswrapper[4872]: E0203 06:02:29.122382 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:29 crc kubenswrapper[4872]: E0203 06:02:29.122518 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.813886 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/1.log" Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.814515 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/0.log" Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.814595 4872 generic.go:334] "Generic (PLEG): container finished" podID="db59aed5-04bc-4793-8938-196aace29feb" containerID="6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936" exitCode=1 Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.814646 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerDied","Data":"6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936"} Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.814744 4872 scope.go:117] "RemoveContainer" containerID="d7f2079602f316c4e83523451b6543952e97e37e133ead7b25c01e6d698dff0e" Feb 03 06:02:29 crc kubenswrapper[4872]: I0203 06:02:29.815549 4872 scope.go:117] "RemoveContainer" containerID="6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936" Feb 03 06:02:29 crc kubenswrapper[4872]: E0203 06:02:29.816018 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-g2f65_openshift-multus(db59aed5-04bc-4793-8938-196aace29feb)\"" pod="openshift-multus/multus-g2f65" podUID="db59aed5-04bc-4793-8938-196aace29feb" Feb 03 06:02:30 crc kubenswrapper[4872]: E0203 06:02:30.085776 4872 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 03 06:02:30 crc kubenswrapper[4872]: I0203 06:02:30.121770 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:30 crc kubenswrapper[4872]: E0203 06:02:30.131803 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:30 crc kubenswrapper[4872]: E0203 06:02:30.232914 4872 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 06:02:30 crc kubenswrapper[4872]: I0203 06:02:30.822628 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/1.log" Feb 03 06:02:31 crc kubenswrapper[4872]: I0203 06:02:31.122260 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:31 crc kubenswrapper[4872]: I0203 06:02:31.122390 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:31 crc kubenswrapper[4872]: I0203 06:02:31.122512 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:31 crc kubenswrapper[4872]: E0203 06:02:31.123059 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:31 crc kubenswrapper[4872]: E0203 06:02:31.123399 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:31 crc kubenswrapper[4872]: E0203 06:02:31.123657 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:32 crc kubenswrapper[4872]: I0203 06:02:32.122032 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:32 crc kubenswrapper[4872]: E0203 06:02:32.122224 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:33 crc kubenswrapper[4872]: I0203 06:02:33.122026 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:33 crc kubenswrapper[4872]: I0203 06:02:33.122086 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:33 crc kubenswrapper[4872]: E0203 06:02:33.122235 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:33 crc kubenswrapper[4872]: E0203 06:02:33.122386 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:33 crc kubenswrapper[4872]: I0203 06:02:33.122057 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:33 crc kubenswrapper[4872]: E0203 06:02:33.123038 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:34 crc kubenswrapper[4872]: I0203 06:02:34.122836 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:34 crc kubenswrapper[4872]: E0203 06:02:34.123067 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:34 crc kubenswrapper[4872]: I0203 06:02:34.123365 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:02:34 crc kubenswrapper[4872]: I0203 06:02:34.840673 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/3.log" Feb 03 06:02:34 crc kubenswrapper[4872]: I0203 06:02:34.843128 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerStarted","Data":"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5"} Feb 03 06:02:34 crc kubenswrapper[4872]: I0203 06:02:34.843809 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:02:34 crc kubenswrapper[4872]: I0203 06:02:34.878404 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podStartSLOduration=99.8783832 podStartE2EDuration="1m39.8783832s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:34.877722895 +0000 UTC m=+125.460414339" watchObservedRunningTime="2026-02-03 06:02:34.8783832 +0000 UTC m=+125.461074634" Feb 03 06:02:35 crc kubenswrapper[4872]: I0203 06:02:35.113578 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-drpfn"] Feb 03 06:02:35 crc kubenswrapper[4872]: I0203 06:02:35.113703 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:35 crc kubenswrapper[4872]: E0203 06:02:35.113786 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:35 crc kubenswrapper[4872]: I0203 06:02:35.122723 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:35 crc kubenswrapper[4872]: E0203 06:02:35.122886 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:35 crc kubenswrapper[4872]: I0203 06:02:35.122736 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:35 crc kubenswrapper[4872]: E0203 06:02:35.122978 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:35 crc kubenswrapper[4872]: I0203 06:02:35.122732 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:35 crc kubenswrapper[4872]: E0203 06:02:35.123041 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:35 crc kubenswrapper[4872]: E0203 06:02:35.234976 4872 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 06:02:37 crc kubenswrapper[4872]: I0203 06:02:37.121865 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:37 crc kubenswrapper[4872]: I0203 06:02:37.121925 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:37 crc kubenswrapper[4872]: I0203 06:02:37.121999 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:37 crc kubenswrapper[4872]: E0203 06:02:37.122572 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:37 crc kubenswrapper[4872]: E0203 06:02:37.122349 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:37 crc kubenswrapper[4872]: I0203 06:02:37.121999 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:37 crc kubenswrapper[4872]: E0203 06:02:37.122725 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:37 crc kubenswrapper[4872]: E0203 06:02:37.122855 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:39 crc kubenswrapper[4872]: I0203 06:02:39.122202 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:39 crc kubenswrapper[4872]: I0203 06:02:39.122315 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:39 crc kubenswrapper[4872]: E0203 06:02:39.122379 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:39 crc kubenswrapper[4872]: I0203 06:02:39.122210 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:39 crc kubenswrapper[4872]: I0203 06:02:39.122234 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:39 crc kubenswrapper[4872]: E0203 06:02:39.122512 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:39 crc kubenswrapper[4872]: E0203 06:02:39.122645 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:39 crc kubenswrapper[4872]: E0203 06:02:39.122827 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:39 crc kubenswrapper[4872]: I0203 06:02:39.860125 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:02:40 crc kubenswrapper[4872]: E0203 06:02:40.235996 4872 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 06:02:41 crc kubenswrapper[4872]: I0203 06:02:41.122598 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:41 crc kubenswrapper[4872]: I0203 06:02:41.122635 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:41 crc kubenswrapper[4872]: I0203 06:02:41.122726 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:41 crc kubenswrapper[4872]: I0203 06:02:41.122805 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:41 crc kubenswrapper[4872]: E0203 06:02:41.122802 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:41 crc kubenswrapper[4872]: E0203 06:02:41.122936 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:41 crc kubenswrapper[4872]: E0203 06:02:41.122998 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:41 crc kubenswrapper[4872]: E0203 06:02:41.123183 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:43 crc kubenswrapper[4872]: I0203 06:02:43.122724 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:43 crc kubenswrapper[4872]: I0203 06:02:43.122757 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:43 crc kubenswrapper[4872]: I0203 06:02:43.122768 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:43 crc kubenswrapper[4872]: E0203 06:02:43.122908 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:43 crc kubenswrapper[4872]: E0203 06:02:43.123289 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:43 crc kubenswrapper[4872]: E0203 06:02:43.123394 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:43 crc kubenswrapper[4872]: I0203 06:02:43.123418 4872 scope.go:117] "RemoveContainer" containerID="6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936" Feb 03 06:02:43 crc kubenswrapper[4872]: I0203 06:02:43.123823 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:43 crc kubenswrapper[4872]: E0203 06:02:43.123899 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:43 crc kubenswrapper[4872]: I0203 06:02:43.879199 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/1.log" Feb 03 06:02:43 crc kubenswrapper[4872]: I0203 06:02:43.879853 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerStarted","Data":"1648ba316b90acc20185dae52c795ff2915e2ad6ae7077f5dacd0dc1bdbd67db"} Feb 03 06:02:45 crc kubenswrapper[4872]: I0203 06:02:45.122380 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:45 crc kubenswrapper[4872]: I0203 06:02:45.122425 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:45 crc kubenswrapper[4872]: I0203 06:02:45.122380 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:45 crc kubenswrapper[4872]: E0203 06:02:45.122625 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 06:02:45 crc kubenswrapper[4872]: E0203 06:02:45.122826 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 06:02:45 crc kubenswrapper[4872]: E0203 06:02:45.123051 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 06:02:45 crc kubenswrapper[4872]: I0203 06:02:45.122411 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:45 crc kubenswrapper[4872]: E0203 06:02:45.123859 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drpfn" podUID="a14ad474-acae-486b-bac9-e5e20cc8ec2e" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.121972 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.122082 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.122182 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.122236 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.124545 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.125179 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.125307 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.127943 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.129365 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 06:02:47 crc kubenswrapper[4872]: I0203 06:02:47.129499 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.537929 4872 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.586848 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.587468 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.607510 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-f7wnn"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.608019 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f7wnn" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.608656 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.608953 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.610108 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rlbdg"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.610487 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.611546 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v5phf"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.612173 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.615245 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.615262 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.615668 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.615665 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.616285 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.616919 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-46ccx"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.617525 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.631951 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.635353 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.635648 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.641314 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.641533 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.643129 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.643216 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.643128 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.643473 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.644141 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.643852 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.652027 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.656232 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.656503 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qcvzs"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.656644 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.657678 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.663191 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.663311 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.663324 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.663494 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.674481 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.674597 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.674655 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.674749 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.674829 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.675046 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.675282 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.675593 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.675665 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.676922 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.677032 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.677853 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.677971 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.678106 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.678376 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.678448 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.678900 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.678984 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679165 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679232 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679237 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679252 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679255 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679023 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679301 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679638 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679333 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679341 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.679843 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fz7px"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.680364 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.681332 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.682152 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pqf2v"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.682454 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.682651 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bk69"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.682927 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.683212 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.683260 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.687931 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.688407 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.688507 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sz2ft"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.688860 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.690787 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.690987 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.691102 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.691200 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.693019 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.693663 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.697537 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.718002 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.705398 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7697\" (UniqueName: \"kubernetes.io/projected/0180e076-5e8c-4190-bd67-569e2f915913-kube-api-access-z7697\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.718656 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.720726 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.722437 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.722697 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730487 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.722824 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730632 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730857 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 06:02:49 crc 
kubenswrapper[4872]: I0203 06:02:49.730986 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730862 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.731125 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730545 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730566 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730596 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.730505 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.731578 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.731737 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.731838 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.732177 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.732333 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.732612 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.732678 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.732715 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.722550 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.742136 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 
06:02:49.742308 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.742385 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.742486 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744202 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744383 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744453 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744548 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744600 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.732854 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744826 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftkjz\" (UniqueName: \"kubernetes.io/projected/e0b641a4-8a66-4550-961f-c273bd9940e0-kube-api-access-ftkjz\") pod \"downloads-7954f5f757-f7wnn\" (UID: \"e0b641a4-8a66-4550-961f-c273bd9940e0\") " pod="openshift-console/downloads-7954f5f757-f7wnn" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744850 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-audit\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744868 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d546f30f-2c86-4925-8566-00e50b7875c7-audit-dir\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744892 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9ac5410-335f-4e33-88c7-4b7af39718ab-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2jhdv\" (UID: 
\"c9ac5410-335f-4e33-88c7-4b7af39718ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744907 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-serving-cert\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744925 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-serving-cert\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744945 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-image-import-ca\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744961 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6297d\" (UniqueName: \"kubernetes.io/projected/d546f30f-2c86-4925-8566-00e50b7875c7-kube-api-access-6297d\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744980 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-etcd-client\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.744995 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-console-config\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.746068 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.746188 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-policies\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.746766 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.746789 4872 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-7xf8z"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.747014 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.747235 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.747459 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwq84"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.746775 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.748194 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.748420 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.746955 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.748915 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.747568 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.749873 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-oauth-serving-cert\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.749900 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-oauth-config\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750008 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750067 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750210 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-audit-dir\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750310 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750339 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750365 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn48x\" (UniqueName: \"kubernetes.io/projected/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-kube-api-access-xn48x\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750386 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-encryption-config\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750404 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750428 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-trusted-ca-bundle\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750457 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750479 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-dir\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750499 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750513 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750565 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750585 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-config\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750611 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8x67\" (UniqueName: \"kubernetes.io/projected/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-kube-api-access-h8x67\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750631 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-audit-policies\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750649 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-service-ca\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 
06:02:49.750671 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750715 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750738 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-etcd-client\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750761 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-encryption-config\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750785 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750821 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750843 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d546f30f-2c86-4925-8566-00e50b7875c7-node-pullsecrets\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750860 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp2l\" (UniqueName: \"kubernetes.io/projected/c9ac5410-335f-4e33-88c7-4b7af39718ab-kube-api-access-brp2l\") pod \"cluster-samples-operator-665b6dd947-2jhdv\" (UID: \"c9ac5410-335f-4e33-88c7-4b7af39718ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.750883 4872 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-serving-cert\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.751019 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.751732 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.752026 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.752149 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.753261 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.756213 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.756549 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.757879 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.757920 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.758199 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.758424 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.758548 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.758596 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.761220 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.762037 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.763916 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4wqw4"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.764237 4872 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.764467 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.764823 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.765068 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.765198 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.765297 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.765354 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.765425 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.765602 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.766128 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.766460 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7m4zk"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.770777 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v5phf"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.770804 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.770815 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f7wnn"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.770826 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pqf2v"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.770837 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rl5n4"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771215 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rlbdg"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771232 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qcvzs"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771240 4872 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771250 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771314 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771516 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771653 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.771799 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.772751 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.774413 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.776043 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fz7px"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.777180 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-46ccx"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.778529 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bk69"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.786943 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.796232 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.799040 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.809123 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.811118 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.811543 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.811882 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.818044 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8v95g"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.818608 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.818957 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.819489 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.820287 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xwb69"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.820803 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.824219 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-48f7z"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.824944 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.826252 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.832583 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.832848 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.833219 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.833321 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.833876 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kf6f5"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.834230 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.834627 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.834921 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.834987 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.835386 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.842399 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h5dtj"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.843396 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.843745 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwq84"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.844757 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.847000 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mwrpx"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.850694 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s695g"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.851384 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rl5n4"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.851429 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.851464 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.851626 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852161 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852192 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-etcd-client\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852216 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac75736-e160-4979-b86d-1232fcbb9387-config\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852233 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69e367ca-9517-4daf-bae1-886b4f854b76-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852252 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852267 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-encryption-config\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852284 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e367ca-9517-4daf-bae1-886b4f854b76-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852346 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852377 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-client-ca\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852396 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4d0117-eecb-460f-997f-7fcb92cabdef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852410 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4d0117-eecb-460f-997f-7fcb92cabdef-config\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852431 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d546f30f-2c86-4925-8566-00e50b7875c7-node-pullsecrets\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852447 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brp2l\" (UniqueName: \"kubernetes.io/projected/c9ac5410-335f-4e33-88c7-4b7af39718ab-kube-api-access-brp2l\") pod \"cluster-samples-operator-665b6dd947-2jhdv\" (UID: \"c9ac5410-335f-4e33-88c7-4b7af39718ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852468 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-serving-cert\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852486 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7697\" (UniqueName: \"kubernetes.io/projected/0180e076-5e8c-4190-bd67-569e2f915913-kube-api-access-z7697\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852505 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852521 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69e367ca-9517-4daf-bae1-886b4f854b76-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852550 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852568 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftkjz\" (UniqueName: \"kubernetes.io/projected/e0b641a4-8a66-4550-961f-c273bd9940e0-kube-api-access-ftkjz\") pod \"downloads-7954f5f757-f7wnn\" (UID: \"e0b641a4-8a66-4550-961f-c273bd9940e0\") " pod="openshift-console/downloads-7954f5f757-f7wnn" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852588 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-audit\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852604 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d546f30f-2c86-4925-8566-00e50b7875c7-audit-dir\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852620 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f72f9246-edd6-47f2-8661-eacb3bdcb165-trusted-ca\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852645 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9ac5410-335f-4e33-88c7-4b7af39718ab-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2jhdv\" (UID: \"c9ac5410-335f-4e33-88c7-4b7af39718ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852663 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-serving-cert\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852679 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8ee41a-ef50-4114-b75a-75f65fe070c9-serving-cert\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852711 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852730 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc8ee41a-ef50-4114-b75a-75f65fe070c9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852750 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-image-import-ca\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852767 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6297d\" (UniqueName: \"kubernetes.io/projected/d546f30f-2c86-4925-8566-00e50b7875c7-kube-api-access-6297d\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852782 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-serving-cert\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852798 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npbxl\" (UniqueName: \"kubernetes.io/projected/bc8ee41a-ef50-4114-b75a-75f65fe070c9-kube-api-access-npbxl\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852819 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-etcd-client\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852836 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-console-config\") pod \"console-f9d7485db-rlbdg\" (UID: 
\"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852860 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f72f9246-edd6-47f2-8661-eacb3bdcb165-config\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852862 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852876 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852905 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-policies\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852922 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-config\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.852942 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-oauth-serving-cert\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853125 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac75736-e160-4979-b86d-1232fcbb9387-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853145 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a611c711-1f25-4e6e-983c-17c001aaeabd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 
03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853159 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4d0117-eecb-460f-997f-7fcb92cabdef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853178 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853196 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853211 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-oauth-config\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853229 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-audit-dir\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853249 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853266 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853284 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn48x\" (UniqueName: \"kubernetes.io/projected/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-kube-api-access-xn48x\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853302 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-encryption-config\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853319 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a611c711-1f25-4e6e-983c-17c001aaeabd-images\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853339 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853357 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853374 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-trusted-ca-bundle\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853389 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtb8j\" (UniqueName: \"kubernetes.io/projected/cac75736-e160-4979-b86d-1232fcbb9387-kube-api-access-qtb8j\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853405 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4td8g\" (UniqueName: \"kubernetes.io/projected/a611c711-1f25-4e6e-983c-17c001aaeabd-kube-api-access-4td8g\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853423 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-dir\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853438 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922sg\" (UniqueName: 
\"kubernetes.io/projected/f72f9246-edd6-47f2-8661-eacb3bdcb165-kube-api-access-922sg\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853456 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-serving-cert\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853487 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853504 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853520 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mtdk\" (UniqueName: \"kubernetes.io/projected/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-kube-api-access-4mtdk\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853546 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853562 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-config\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853578 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwntn\" (UniqueName: \"kubernetes.io/projected/69e367ca-9517-4daf-bae1-886b4f854b76-kube-api-access-zwntn\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853598 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8x67\" (UniqueName: 
\"kubernetes.io/projected/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-kube-api-access-h8x67\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853612 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f72f9246-edd6-47f2-8661-eacb3bdcb165-serving-cert\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853629 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-audit-policies\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853644 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-service-ca\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853661 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853676 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a611c711-1f25-4e6e-983c-17c001aaeabd-config\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853709 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.853723 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.855701 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d546f30f-2c86-4925-8566-00e50b7875c7-audit-dir\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " 
pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.858602 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.858811 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9ac5410-335f-4e33-88c7-4b7af39718ab-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2jhdv\" (UID: \"c9ac5410-335f-4e33-88c7-4b7af39718ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.859020 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-etcd-client\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.860339 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.860435 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d546f30f-2c86-4925-8566-00e50b7875c7-node-pullsecrets\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.861020 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-serving-cert\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.861027 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-console-config\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.862615 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-image-import-ca\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.863010 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4wqw4"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.863246 4872 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.863171 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.863298 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.863299 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-audit-dir\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.863589 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-serving-cert\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.864068 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.864251 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-audit\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.865177 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-trusted-ca-bundle\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.865485 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-dir\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.865544 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xwb69"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.866124 4872 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.866995 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.867389 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.867420 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-etcd-serving-ca\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.867828 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-policies\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.867978 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-config\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.868653 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-serving-cert\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.869705 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d546f30f-2c86-4925-8566-00e50b7875c7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.869925 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-audit-policies\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.870324 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-oauth-config\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.870571 4872 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.870767 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7xf8z"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.871192 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-service-ca\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.872647 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.873700 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s695g"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.874673 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.876447 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sz2ft"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.877230 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-oauth-serving-cert\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.877270 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.877409 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.877862 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.880953 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.881054 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.881242 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-etcd-client\") pod \"apiserver-76f77b778f-46ccx\" 
(UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.881921 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.884883 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.885632 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d546f30f-2c86-4925-8566-00e50b7875c7-encryption-config\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.889946 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.890312 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-48f7z"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.890428 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.892251 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.893233 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.898088 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-encryption-config\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.902747 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.904057 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.905544 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8v95g"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.906669 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h5dtj"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.907741 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.909582 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kf6f5"] Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.909508 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.929737 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.949033 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954311 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtb8j\" (UniqueName: \"kubernetes.io/projected/cac75736-e160-4979-b86d-1232fcbb9387-kube-api-access-qtb8j\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954341 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4td8g\" (UniqueName: \"kubernetes.io/projected/a611c711-1f25-4e6e-983c-17c001aaeabd-kube-api-access-4td8g\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954362 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-922sg\" (UniqueName: \"kubernetes.io/projected/f72f9246-edd6-47f2-8661-eacb3bdcb165-kube-api-access-922sg\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954379 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-serving-cert\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954396 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mtdk\" (UniqueName: \"kubernetes.io/projected/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-kube-api-access-4mtdk\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954414 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwntn\" (UniqueName: \"kubernetes.io/projected/69e367ca-9517-4daf-bae1-886b4f854b76-kube-api-access-zwntn\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954446 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f72f9246-edd6-47f2-8661-eacb3bdcb165-serving-cert\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954462 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954476 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a611c711-1f25-4e6e-983c-17c001aaeabd-config\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954492 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954518 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac75736-e160-4979-b86d-1232fcbb9387-config\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954535 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69e367ca-9517-4daf-bae1-886b4f854b76-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954554 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e367ca-9517-4daf-bae1-886b4f854b76-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954572 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-client-ca\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954601 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4d0117-eecb-460f-997f-7fcb92cabdef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954617 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4d0117-eecb-460f-997f-7fcb92cabdef-config\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954645 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69e367ca-9517-4daf-bae1-886b4f854b76-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954676 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f72f9246-edd6-47f2-8661-eacb3bdcb165-trusted-ca\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954724 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8ee41a-ef50-4114-b75a-75f65fe070c9-serving-cert\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954741 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954758 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc8ee41a-ef50-4114-b75a-75f65fe070c9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954781 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npbxl\" (UniqueName: \"kubernetes.io/projected/bc8ee41a-ef50-4114-b75a-75f65fe070c9-kube-api-access-npbxl\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954798 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f72f9246-edd6-47f2-8661-eacb3bdcb165-config\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954814 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954831 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-config\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954851 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cac75736-e160-4979-b86d-1232fcbb9387-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954867 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a611c711-1f25-4e6e-983c-17c001aaeabd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954884 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4d0117-eecb-460f-997f-7fcb92cabdef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.954904 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a611c711-1f25-4e6e-983c-17c001aaeabd-images\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.955479 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a611c711-1f25-4e6e-983c-17c001aaeabd-config\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.955935 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a611c711-1f25-4e6e-983c-17c001aaeabd-images\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.956980 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.957604 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc8ee41a-ef50-4114-b75a-75f65fe070c9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.957807 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-config\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.958444 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.958772 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f72f9246-edd6-47f2-8661-eacb3bdcb165-config\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.958919 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-client-ca\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.959106 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8ee41a-ef50-4114-b75a-75f65fe070c9-serving-cert\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:49 crc 
kubenswrapper[4872]: I0203 06:02:49.959382 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f72f9246-edd6-47f2-8661-eacb3bdcb165-trusted-ca\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.959391 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4d0117-eecb-460f-997f-7fcb92cabdef-config\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.959663 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cac75736-e160-4979-b86d-1232fcbb9387-config\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.959721 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f72f9246-edd6-47f2-8661-eacb3bdcb165-serving-cert\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.960265 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a611c711-1f25-4e6e-983c-17c001aaeabd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.960617 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69e367ca-9517-4daf-bae1-886b4f854b76-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.960745 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4d0117-eecb-460f-997f-7fcb92cabdef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.962019 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.963016 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cac75736-e160-4979-b86d-1232fcbb9387-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.963364 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e367ca-9517-4daf-bae1-886b4f854b76-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.963781 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-serving-cert\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:49 crc kubenswrapper[4872]: I0203 06:02:49.969465 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.009304 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.028842 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.049548 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.070969 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.090445 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.111135 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.131158 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.150459 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.169759 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.189919 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.209909 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.230511 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.251222 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.270570 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.289915 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.310299 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.330349 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.350275 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.370855 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.390124 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.420621 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.430554 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.450424 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.470309 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.489952 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.509793 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.530676 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.551849 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.569935 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.590377 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 06:02:50 crc 
kubenswrapper[4872]: I0203 06:02:50.610304 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.630677 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.650953 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.671086 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.690486 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.711243 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.731063 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.750797 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.770115 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.788206 4872 request.go:700] Waited for 1.015789865s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0 Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.801277 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.810094 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.832399 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.850101 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.870476 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.890207 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.909786 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.929622 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 06:02:50 crc 
kubenswrapper[4872]: I0203 06:02:50.969169 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksmk\" (UniqueName: \"kubernetes.io/projected/3aa95c63-1461-4ba3-87bf-948b24cb0e32-kube-api-access-9ksmk\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969236 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-trusted-ca\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969304 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef1cbf-eef5-48f8-b111-6b7244d686d4-serving-cert\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969350 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-certificates\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969417 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-tls\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969447 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3aa95c63-1461-4ba3-87bf-948b24cb0e32-machine-approver-tls\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969508 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969562 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-config\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:50 crc 
kubenswrapper[4872]: I0203 06:02:50.969627 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7976c56b-b1e2-432b-9abb-d88e6483c5bc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969723 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa95c63-1461-4ba3-87bf-948b24cb0e32-config\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969810 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7976c56b-b1e2-432b-9abb-d88e6483c5bc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.969897 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b57c265-ae84-4b43-b90f-c734a20b43b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.970019 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-client-ca\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.970056 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b57c265-ae84-4b43-b90f-c734a20b43b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.970027 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.970401 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-bound-sa-token\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: E0203 06:02:50.970613 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-02-03 06:02:51.47059668 +0000 UTC m=+142.053288104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971135 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrbk\" (UniqueName: \"kubernetes.io/projected/aeef1cbf-eef5-48f8-b111-6b7244d686d4-kube-api-access-smrbk\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971378 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j229d\" (UniqueName: \"kubernetes.io/projected/9b57c265-ae84-4b43-b90f-c734a20b43b4-kube-api-access-j229d\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971532 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971709 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2edf464-7a14-4e97-87d9-1414f5b4818a-metrics-tls\") pod \"dns-operator-744455d44c-xwq84\" (UID: \"f2edf464-7a14-4e97-87d9-1414f5b4818a\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971891 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c2d864-8067-41ec-88ce-b7d7b727d8f2-serving-cert\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971921 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzgkx\" (UniqueName: \"kubernetes.io/projected/f2edf464-7a14-4e97-87d9-1414f5b4818a-kube-api-access-pzgkx\") pod \"dns-operator-744455d44c-xwq84\" (UID: \"f2edf464-7a14-4e97-87d9-1414f5b4818a\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971951 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb767\" (UniqueName: 
\"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-kube-api-access-nb767\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.971973 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.972015 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtbg\" (UniqueName: \"kubernetes.io/projected/63c2d864-8067-41ec-88ce-b7d7b727d8f2-kube-api-access-jbtbg\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.972031 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3aa95c63-1461-4ba3-87bf-948b24cb0e32-auth-proxy-config\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.972070 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-config\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:50 crc kubenswrapper[4872]: I0203 06:02:50.989946 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.010177 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.029958 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.050111 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.069994 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.072950 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.073181 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.573111036 +0000 UTC m=+142.155802480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073358 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b57c265-ae84-4b43-b90f-c734a20b43b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073410 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rl5n4\" (UID: \"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073453 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7976c56b-b1e2-432b-9abb-d88e6483c5bc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073489 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0135232-d15a-483f-9c5d-0047001f554a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073522 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5601e187-f5b1-43fb-9022-a8287b8d5488-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073563 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-bound-sa-token\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073595 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-config\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073656 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/212974c1-d4ce-40be-bc86-10539c144803-config-volume\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073725 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/212974c1-d4ce-40be-bc86-10539c144803-metrics-tls\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073768 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f3285c9c-4471-4988-94e9-8ad1f65c7649-tmpfs\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.073965 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-stats-auth\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074011 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j229d\" (UniqueName: \"kubernetes.io/projected/9b57c265-ae84-4b43-b90f-c734a20b43b4-kube-api-access-j229d\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074042 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71180cf9-1370-4a11-9169-d7a121517ca4-srv-cert\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074075 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074106 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqjz\" (UniqueName: \"kubernetes.io/projected/b0135232-d15a-483f-9c5d-0047001f554a-kube-api-access-jkqjz\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: 
\"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074171 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5601e187-f5b1-43fb-9022-a8287b8d5488-config\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074207 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw9sr\" (UniqueName: \"kubernetes.io/projected/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-kube-api-access-qw9sr\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074257 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txs7l\" (UniqueName: \"kubernetes.io/projected/f3285c9c-4471-4988-94e9-8ad1f65c7649-kube-api-access-txs7l\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074317 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074330 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7976c56b-b1e2-432b-9abb-d88e6483c5bc-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074346 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-plugins-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074377 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnkz\" (UniqueName: \"kubernetes.io/projected/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-kube-api-access-vnnkz\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074431 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-client\") pod \"etcd-operator-b45778765-4wqw4\" (UID: 
\"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074462 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-socket-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074585 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzgkx\" (UniqueName: \"kubernetes.io/projected/f2edf464-7a14-4e97-87d9-1414f5b4818a-kube-api-access-pzgkx\") pod \"dns-operator-744455d44c-xwq84\" (UID: \"f2edf464-7a14-4e97-87d9-1414f5b4818a\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074652 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b69br\" (UniqueName: \"kubernetes.io/projected/a2e5c4f4-ca10-481c-803f-d49e6eb90295-kube-api-access-b69br\") pod \"migrator-59844c95c7-cljr9\" (UID: \"a2e5c4f4-ca10-481c-803f-d49e6eb90295\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074734 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f3285c9c-4471-4988-94e9-8ad1f65c7649-webhook-cert\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074876 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074927 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0135232-d15a-483f-9c5d-0047001f554a-metrics-tls\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.074966 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d5mg\" (UniqueName: \"kubernetes.io/projected/1d3b0b03-3b43-4694-8d79-7a615a4f3dbc-kube-api-access-5d5mg\") pod \"ingress-canary-h5dtj\" (UID: \"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc\") " pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.075103 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 
06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.075112 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb767\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-kube-api-access-nb767\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.075237 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3aa95c63-1461-4ba3-87bf-948b24cb0e32-auth-proxy-config\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.075305 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28287dc3-2b46-498f-9972-5a861374f4d5-secret-volume\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076304 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3aa95c63-1461-4ba3-87bf-948b24cb0e32-auth-proxy-config\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076384 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-httg4\" (UniqueName: \"kubernetes.io/projected/4955a26c-c5ff-42cb-a72f-b37e3bfded04-kube-api-access-httg4\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076419 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f3285c9c-4471-4988-94e9-8ad1f65c7649-apiservice-cert\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076455 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-config\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076486 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-serving-cert\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076529 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7rw\" (UniqueName: \"kubernetes.io/projected/169884b0-3648-4ee3-a113-2ad95475c8e4-kube-api-access-js7rw\") pod \"package-server-manager-789f6589d5-q9w6x\" (UID: \"169884b0-3648-4ee3-a113-2ad95475c8e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076581 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3892d90a-e1d2-42b3-82c4-efea782d8ba1-serving-cert\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076615 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef1cbf-eef5-48f8-b111-6b7244d686d4-serving-cert\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076649 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmrn\" (UniqueName: \"kubernetes.io/projected/3892d90a-e1d2-42b3-82c4-efea782d8ba1-kube-api-access-lzmrn\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076681 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rj98\" (UniqueName: \"kubernetes.io/projected/2af55037-7ec8-4f97-96da-26ceb9c3a88f-kube-api-access-5rj98\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076777 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/169884b0-3648-4ee3-a113-2ad95475c8e4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q9w6x\" (UID: \"169884b0-3648-4ee3-a113-2ad95475c8e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076808 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d3b0b03-3b43-4694-8d79-7a615a4f3dbc-cert\") pod \"ingress-canary-h5dtj\" (UID: \"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc\") " pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076844 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-certificates\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.076878 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858rs\" (UniqueName: \"kubernetes.io/projected/3c1bfc9a-db7c-49a5-acd6-05ad2a616cae-kube-api-access-858rs\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvqv2\" (UID: \"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.077479 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-tls\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.077963 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3aa95c63-1461-4ba3-87bf-948b24cb0e32-machine-approver-tls\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078009 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgcq\" (UniqueName: \"kubernetes.io/projected/e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b-kube-api-access-jrgcq\") pod \"multus-admission-controller-857f4d67dd-rl5n4\" (UID: \"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078106 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-csi-data-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078154 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7976c56b-b1e2-432b-9abb-d88e6483c5bc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078190 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-signing-key\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078242 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa95c63-1461-4ba3-87bf-948b24cb0e32-config\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078297 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbkq\" (UniqueName: 
\"kubernetes.io/projected/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-kube-api-access-mhbkq\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078329 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-client-ca\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078363 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b57c265-ae84-4b43-b90f-c734a20b43b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078396 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-metrics-certs\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078430 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f57116dc-68fe-4b40-8b1e-71b7d85153d0-profile-collector-cert\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078462 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-ca\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078501 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89cmf\" (UniqueName: \"kubernetes.io/projected/56635a7e-1505-4486-978c-9aaf84e13929-kube-api-access-89cmf\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078570 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5601e187-f5b1-43fb-9022-a8287b8d5488-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078626 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rd9jn\" (UniqueName: \"kubernetes.io/projected/0002faa0-2d17-4c16-8634-913b9012788e-kube-api-access-rd9jn\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.078981 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-config\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.080021 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b57c265-ae84-4b43-b90f-c734a20b43b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.080589 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa95c63-1461-4ba3-87bf-948b24cb0e32-config\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.081594 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b57c265-ae84-4b43-b90f-c734a20b43b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.081764 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-client-ca\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.081999 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smrbk\" (UniqueName: \"kubernetes.io/projected/aeef1cbf-eef5-48f8-b111-6b7244d686d4-kube-api-access-smrbk\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.082074 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0135232-d15a-483f-9c5d-0047001f554a-trusted-ca\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.082122 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-signing-cabundle\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.082975 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-config\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083020 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-service-ca\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083052 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56635a7e-1505-4486-978c-9aaf84e13929-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083052 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-certificates\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083109 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86sn\" (UniqueName: \"kubernetes.io/projected/36f98287-87b6-41a4-b88b-0dfe63b17838-kube-api-access-d86sn\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083198 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2edf464-7a14-4e97-87d9-1414f5b4818a-metrics-tls\") pod \"dns-operator-744455d44c-xwq84\" (UID: \"f2edf464-7a14-4e97-87d9-1414f5b4818a\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083234 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-mountpoint-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083267 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwf7d\" (UniqueName: \"kubernetes.io/projected/f57116dc-68fe-4b40-8b1e-71b7d85153d0-kube-api-access-qwf7d\") pod \"catalog-operator-68c6474976-frjwx\" (UID: 
\"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083302 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56635a7e-1505-4486-978c-9aaf84e13929-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083315 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3aa95c63-1461-4ba3-87bf-948b24cb0e32-machine-approver-tls\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083340 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c2d864-8067-41ec-88ce-b7d7b727d8f2-serving-cert\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083396 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083431 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083484 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzwt\" (UniqueName: \"kubernetes.io/projected/212974c1-d4ce-40be-bc86-10539c144803-kube-api-access-nbzwt\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083537 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtbg\" (UniqueName: \"kubernetes.io/projected/63c2d864-8067-41ec-88ce-b7d7b727d8f2-kube-api-access-jbtbg\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083571 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/36f98287-87b6-41a4-b88b-0dfe63b17838-node-bootstrap-token\") pod \"machine-config-server-mwrpx\" (UID: 
\"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083619 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxf9c\" (UniqueName: \"kubernetes.io/projected/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-kube-api-access-bxf9c\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083660 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-registration-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083851 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-default-certificate\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083889 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-proxy-tls\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083961 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2af55037-7ec8-4f97-96da-26ceb9c3a88f-proxy-tls\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.083994 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2af55037-7ec8-4f97-96da-26ceb9c3a88f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084076 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ksmk\" (UniqueName: \"kubernetes.io/projected/3aa95c63-1461-4ba3-87bf-948b24cb0e32-kube-api-access-9ksmk\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084150 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4955a26c-c5ff-42cb-a72f-b37e3bfded04-service-ca-bundle\") pod \"router-default-5444994796-7m4zk\" 
(UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084226 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-trusted-ca\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084259 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-images\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084356 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/36f98287-87b6-41a4-b88b-0dfe63b17838-certs\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084495 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef1cbf-eef5-48f8-b111-6b7244d686d4-serving-cert\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084558 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71180cf9-1370-4a11-9169-d7a121517ca4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084676 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f57116dc-68fe-4b40-8b1e-71b7d85153d0-srv-cert\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084791 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084826 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c1bfc9a-db7c-49a5-acd6-05ad2a616cae-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvqv2\" (UID: \"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084868 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdzng\" (UniqueName: \"kubernetes.io/projected/28287dc3-2b46-498f-9972-5a861374f4d5-kube-api-access-sdzng\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.084980 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.085090 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-config\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.085163 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28287dc3-2b46-498f-9972-5a861374f4d5-config-volume\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.085238 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h78f\" (UniqueName: \"kubernetes.io/projected/71180cf9-1370-4a11-9169-d7a121517ca4-kube-api-access-2h78f\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.085904 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.585884912 +0000 UTC m=+142.168576356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.085970 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-trusted-ca\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.086357 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c2d864-8067-41ec-88ce-b7d7b727d8f2-config\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.088028 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-tls\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.088420 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7976c56b-b1e2-432b-9abb-d88e6483c5bc-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.089239 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f2edf464-7a14-4e97-87d9-1414f5b4818a-metrics-tls\") pod \"dns-operator-744455d44c-xwq84\" (UID: \"f2edf464-7a14-4e97-87d9-1414f5b4818a\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.089747 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63c2d864-8067-41ec-88ce-b7d7b727d8f2-serving-cert\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.092614 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.110975 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.130350 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.151293 4872 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.172052 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.186552 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.186825 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.686774419 +0000 UTC m=+142.269465893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.186924 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rl5n4\" (UID: \"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187018 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0135232-d15a-483f-9c5d-0047001f554a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187073 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5601e187-f5b1-43fb-9022-a8287b8d5488-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187153 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-config\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187241 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f3285c9c-4471-4988-94e9-8ad1f65c7649-tmpfs\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187295 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/212974c1-d4ce-40be-bc86-10539c144803-config-volume\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187340 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/212974c1-d4ce-40be-bc86-10539c144803-metrics-tls\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187437 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71180cf9-1370-4a11-9169-d7a121517ca4-srv-cert\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187490 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-stats-auth\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187569 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkqjz\" (UniqueName: \"kubernetes.io/projected/b0135232-d15a-483f-9c5d-0047001f554a-kube-api-access-jkqjz\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187620 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5601e187-f5b1-43fb-9022-a8287b8d5488-config\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187671 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw9sr\" (UniqueName: \"kubernetes.io/projected/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-kube-api-access-qw9sr\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187767 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txs7l\" (UniqueName: \"kubernetes.io/projected/f3285c9c-4471-4988-94e9-8ad1f65c7649-kube-api-access-txs7l\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187863 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187911 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-plugins-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187964 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnnkz\" (UniqueName: \"kubernetes.io/projected/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-kube-api-access-vnnkz\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.187972 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f3285c9c-4471-4988-94e9-8ad1f65c7649-tmpfs\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188016 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-client\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188069 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-socket-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188143 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b69br\" (UniqueName: \"kubernetes.io/projected/a2e5c4f4-ca10-481c-803f-d49e6eb90295-kube-api-access-b69br\") pod \"migrator-59844c95c7-cljr9\" (UID: \"a2e5c4f4-ca10-481c-803f-d49e6eb90295\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188200 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188250 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f3285c9c-4471-4988-94e9-8ad1f65c7649-webhook-cert\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc 
kubenswrapper[4872]: I0203 06:02:51.188312 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0135232-d15a-483f-9c5d-0047001f554a-metrics-tls\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188360 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d5mg\" (UniqueName: \"kubernetes.io/projected/1d3b0b03-3b43-4694-8d79-7a615a4f3dbc-kube-api-access-5d5mg\") pod \"ingress-canary-h5dtj\" (UID: \"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc\") " pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188434 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28287dc3-2b46-498f-9972-5a861374f4d5-secret-volume\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188481 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-httg4\" (UniqueName: \"kubernetes.io/projected/4955a26c-c5ff-42cb-a72f-b37e3bfded04-kube-api-access-httg4\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188522 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f3285c9c-4471-4988-94e9-8ad1f65c7649-apiservice-cert\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188571 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-serving-cert\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188645 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3892d90a-e1d2-42b3-82c4-efea782d8ba1-serving-cert\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188728 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js7rw\" (UniqueName: \"kubernetes.io/projected/169884b0-3648-4ee3-a113-2ad95475c8e4-kube-api-access-js7rw\") pod \"package-server-manager-789f6589d5-q9w6x\" (UID: \"169884b0-3648-4ee3-a113-2ad95475c8e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188788 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzmrn\" (UniqueName: 
\"kubernetes.io/projected/3892d90a-e1d2-42b3-82c4-efea782d8ba1-kube-api-access-lzmrn\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188836 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rj98\" (UniqueName: \"kubernetes.io/projected/2af55037-7ec8-4f97-96da-26ceb9c3a88f-kube-api-access-5rj98\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188937 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5601e187-f5b1-43fb-9022-a8287b8d5488-config\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.188965 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/169884b0-3648-4ee3-a113-2ad95475c8e4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q9w6x\" (UID: \"169884b0-3648-4ee3-a113-2ad95475c8e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189013 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d3b0b03-3b43-4694-8d79-7a615a4f3dbc-cert\") pod \"ingress-canary-h5dtj\" (UID: \"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc\") " pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189071 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858rs\" (UniqueName: \"kubernetes.io/projected/3c1bfc9a-db7c-49a5-acd6-05ad2a616cae-kube-api-access-858rs\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvqv2\" (UID: \"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189137 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrgcq\" (UniqueName: \"kubernetes.io/projected/e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b-kube-api-access-jrgcq\") pod \"multus-admission-controller-857f4d67dd-rl5n4\" (UID: \"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189197 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-csi-data-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189251 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-signing-key\") pod \"service-ca-9c57cc56f-48f7z\" (UID: 
\"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189324 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbkq\" (UniqueName: \"kubernetes.io/projected/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-kube-api-access-mhbkq\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189374 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-ca\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189422 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89cmf\" (UniqueName: \"kubernetes.io/projected/56635a7e-1505-4486-978c-9aaf84e13929-kube-api-access-89cmf\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189468 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-metrics-certs\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189512 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f57116dc-68fe-4b40-8b1e-71b7d85153d0-profile-collector-cert\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189580 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5601e187-f5b1-43fb-9022-a8287b8d5488-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189651 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9jn\" (UniqueName: \"kubernetes.io/projected/0002faa0-2d17-4c16-8634-913b9012788e-kube-api-access-rd9jn\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189751 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0135232-d15a-483f-9c5d-0047001f554a-trusted-ca\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 
06:02:51.189799 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-signing-cabundle\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189849 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/212974c1-d4ce-40be-bc86-10539c144803-config-volume\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.189916 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-config\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.191896 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-config\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.192586 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.192833 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-config\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.192925 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-plugins-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.194203 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rl5n4\" (UID: \"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.194907 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/212974c1-d4ce-40be-bc86-10539c144803-metrics-tls\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.195180 4872 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-csi-data-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.195223 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-socket-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.195992 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.196934 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-ca\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.197775 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-client\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.198996 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-serving-cert\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.199076 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-service-ca\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.199115 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56635a7e-1505-4486-978c-9aaf84e13929-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.199909 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3892d90a-e1d2-42b3-82c4-efea782d8ba1-etcd-service-ca\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.201738 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/28287dc3-2b46-498f-9972-5a861374f4d5-secret-volume\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.201756 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0135232-d15a-483f-9c5d-0047001f554a-metrics-tls\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.202479 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-stats-auth\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.203837 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f3285c9c-4471-4988-94e9-8ad1f65c7649-webhook-cert\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.204151 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f3285c9c-4471-4988-94e9-8ad1f65c7649-apiservice-cert\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.204640 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71180cf9-1370-4a11-9169-d7a121517ca4-srv-cert\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.204764 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86sn\" (UniqueName: \"kubernetes.io/projected/36f98287-87b6-41a4-b88b-0dfe63b17838-kube-api-access-d86sn\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.204883 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwf7d\" (UniqueName: \"kubernetes.io/projected/f57116dc-68fe-4b40-8b1e-71b7d85153d0-kube-api-access-qwf7d\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.204935 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-mountpoint-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.204975 
4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56635a7e-1505-4486-978c-9aaf84e13929-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205026 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205111 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/36f98287-87b6-41a4-b88b-0dfe63b17838-node-bootstrap-token\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205158 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbzwt\" (UniqueName: \"kubernetes.io/projected/212974c1-d4ce-40be-bc86-10539c144803-kube-api-access-nbzwt\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205195 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-registration-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205236 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxf9c\" (UniqueName: \"kubernetes.io/projected/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-kube-api-access-bxf9c\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205288 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-proxy-tls\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205331 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2af55037-7ec8-4f97-96da-26ceb9c3a88f-proxy-tls\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205375 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-default-certificate\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205428 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4955a26c-c5ff-42cb-a72f-b37e3bfded04-service-ca-bundle\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205471 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2af55037-7ec8-4f97-96da-26ceb9c3a88f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205517 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-images\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205561 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/36f98287-87b6-41a4-b88b-0dfe63b17838-certs\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205596 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71180cf9-1370-4a11-9169-d7a121517ca4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205641 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f57116dc-68fe-4b40-8b1e-71b7d85153d0-srv-cert\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205756 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c1bfc9a-db7c-49a5-acd6-05ad2a616cae-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvqv2\" (UID: \"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205808 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdzng\" (UniqueName: \"kubernetes.io/projected/28287dc3-2b46-498f-9972-5a861374f4d5-kube-api-access-sdzng\") pod \"collect-profiles-29501640-r5tqt\" (UID: 
\"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205890 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205929 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28287dc3-2b46-498f-9972-5a861374f4d5-config-volume\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.205973 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h78f\" (UniqueName: \"kubernetes.io/projected/71180cf9-1370-4a11-9169-d7a121517ca4-kube-api-access-2h78f\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.206551 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-mountpoint-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.207338 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0135232-d15a-483f-9c5d-0047001f554a-trusted-ca\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.207535 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56635a7e-1505-4486-978c-9aaf84e13929-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.208849 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3892d90a-e1d2-42b3-82c4-efea782d8ba1-serving-cert\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.209939 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4955a26c-c5ff-42cb-a72f-b37e3bfded04-service-ca-bundle\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.210387 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0002faa0-2d17-4c16-8634-913b9012788e-registration-dir\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.211293 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-images\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.211651 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2af55037-7ec8-4f97-96da-26ceb9c3a88f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.212034 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.213837 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-metrics-certs\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.215475 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.715373864 +0000 UTC m=+142.298065308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.222201 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2af55037-7ec8-4f97-96da-26ceb9c3a88f-proxy-tls\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.222287 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56635a7e-1505-4486-978c-9aaf84e13929-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.222931 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5601e187-f5b1-43fb-9022-a8287b8d5488-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.223123 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-proxy-tls\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.223389 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f57116dc-68fe-4b40-8b1e-71b7d85153d0-srv-cert\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.231298 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.231364 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4955a26c-c5ff-42cb-a72f-b37e3bfded04-default-certificate\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.236033 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/169884b0-3648-4ee3-a113-2ad95475c8e4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q9w6x\" (UID: 
\"169884b0-3648-4ee3-a113-2ad95475c8e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.236372 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f57116dc-68fe-4b40-8b1e-71b7d85153d0-profile-collector-cert\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.237254 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71180cf9-1370-4a11-9169-d7a121517ca4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.249768 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.261066 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-signing-key\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.270476 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.271672 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-signing-cabundle\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.289858 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.306677 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.306916 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.806891847 +0000 UTC m=+142.389583301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.307275 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.307794 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.807770358 +0000 UTC m=+142.390461802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.310061 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.329830 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.336747 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c1bfc9a-db7c-49a5-acd6-05ad2a616cae-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvqv2\" (UID: \"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.349559 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.352382 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28287dc3-2b46-498f-9972-5a861374f4d5-config-volume\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.370049 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.390592 4872 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.408560 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.408843 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.908808238 +0000 UTC m=+142.491499692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.409444 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.409659 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.410085 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:51.910064459 +0000 UTC m=+142.492755913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.430275 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.440949 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.450222 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.478811 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.485361 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.490434 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.510821 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.510877 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.511947 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.011923869 +0000 UTC m=+142.594615323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.530076 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.541557 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d3b0b03-3b43-4694-8d79-7a615a4f3dbc-cert\") pod \"ingress-canary-h5dtj\" (UID: \"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc\") " pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.550674 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.570250 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.591015 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.610806 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.613646 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.614148 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.114121918 +0000 UTC m=+142.696813372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.620755 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/36f98287-87b6-41a4-b88b-0dfe63b17838-node-bootstrap-token\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.631119 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.636267 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/36f98287-87b6-41a4-b88b-0dfe63b17838-certs\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.650010 4872 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.669625 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.714526 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.715024 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.214992424 +0000 UTC m=+142.797683878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.718191 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8x67\" (UniqueName: \"kubernetes.io/projected/ee90c7f2-73c6-4d64-a164-39d0dbe68a21-kube-api-access-h8x67\") pod \"apiserver-7bbb656c7d-w5ttc\" (UID: \"ee90c7f2-73c6-4d64-a164-39d0dbe68a21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.719622 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.736834 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6297d\" (UniqueName: \"kubernetes.io/projected/d546f30f-2c86-4925-8566-00e50b7875c7-kube-api-access-6297d\") pod \"apiserver-76f77b778f-46ccx\" (UID: \"d546f30f-2c86-4925-8566-00e50b7875c7\") " pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.759155 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brp2l\" (UniqueName: \"kubernetes.io/projected/c9ac5410-335f-4e33-88c7-4b7af39718ab-kube-api-access-brp2l\") pod \"cluster-samples-operator-665b6dd947-2jhdv\" (UID: \"c9ac5410-335f-4e33-88c7-4b7af39718ab\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.778114 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7697\" (UniqueName: \"kubernetes.io/projected/0180e076-5e8c-4190-bd67-569e2f915913-kube-api-access-z7697\") pod \"console-f9d7485db-rlbdg\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.783995 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.800890 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftkjz\" (UniqueName: \"kubernetes.io/projected/e0b641a4-8a66-4550-961f-c273bd9940e0-kube-api-access-ftkjz\") pod \"downloads-7954f5f757-f7wnn\" (UID: \"e0b641a4-8a66-4550-961f-c273bd9940e0\") " pod="openshift-console/downloads-7954f5f757-f7wnn" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.808091 4872 request.go:700] Waited for 1.853494647s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.813646 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.818364 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.819006 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.318942444 +0000 UTC m=+142.901633898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.822151 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn48x\" (UniqueName: \"kubernetes.io/projected/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-kube-api-access-xn48x\") pod \"oauth-openshift-558db77b4-v5phf\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.832816 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4td8g\" (UniqueName: \"kubernetes.io/projected/a611c711-1f25-4e6e-983c-17c001aaeabd-kube-api-access-4td8g\") pod \"machine-api-operator-5694c8668f-qcvzs\" (UID: \"a611c711-1f25-4e6e-983c-17c001aaeabd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.853947 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.864826 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtb8j\" (UniqueName: \"kubernetes.io/projected/cac75736-e160-4979-b86d-1232fcbb9387-kube-api-access-qtb8j\") pod \"openshift-apiserver-operator-796bbdcf4f-crpxk\" (UID: \"cac75736-e160-4979-b86d-1232fcbb9387\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.917398 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-922sg\" (UniqueName: \"kubernetes.io/projected/f72f9246-edd6-47f2-8661-eacb3bdcb165-kube-api-access-922sg\") pod \"console-operator-58897d9998-sz2ft\" (UID: \"f72f9246-edd6-47f2-8661-eacb3bdcb165\") " pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.919244 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.919730 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwntn\" (UniqueName: \"kubernetes.io/projected/69e367ca-9517-4daf-bae1-886b4f854b76-kube-api-access-zwntn\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.919920 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.419900143 +0000 UTC m=+143.002591567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.920122 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:51 crc kubenswrapper[4872]: E0203 06:02:51.920604 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.42059406 +0000 UTC m=+143.003285484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.923675 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8492fe4a-5d99-4575-8d12-2ad50ecd6d07-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dstl8\" (UID: \"8492fe4a-5d99-4575-8d12-2ad50ecd6d07\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.929913 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mtdk\" (UniqueName: \"kubernetes.io/projected/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-kube-api-access-4mtdk\") pod \"controller-manager-879f6c89f-4bk69\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.947099 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4d0117-eecb-460f-997f-7fcb92cabdef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8f5j7\" (UID: \"7c4d0117-eecb-460f-997f-7fcb92cabdef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.956914 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.969049 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npbxl\" (UniqueName: \"kubernetes.io/projected/bc8ee41a-ef50-4114-b75a-75f65fe070c9-kube-api-access-npbxl\") pod \"openshift-config-operator-7777fb866f-fz7px\" (UID: \"bc8ee41a-ef50-4114-b75a-75f65fe070c9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:51 crc kubenswrapper[4872]: I0203 06:02:51.988413 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69e367ca-9517-4daf-bae1-886b4f854b76-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b7blm\" (UID: \"69e367ca-9517-4daf-bae1-886b4f854b76\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.017697 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.021495 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.021829 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.022192 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.522175603 +0000 UTC m=+143.104867007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.022282 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.022529 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.522522483 +0000 UTC m=+143.105213897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.028475 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-bound-sa-token\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.033072 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.033956 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f7wnn" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.040016 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.044252 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.049002 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j229d\" (UniqueName: \"kubernetes.io/projected/9b57c265-ae84-4b43-b90f-c734a20b43b4-kube-api-access-j229d\") pod \"openshift-controller-manager-operator-756b6f6bc6-ftfxg\" (UID: \"9b57c265-ae84-4b43-b90f-c734a20b43b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.062949 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.069784 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzgkx\" (UniqueName: \"kubernetes.io/projected/f2edf464-7a14-4e97-87d9-1414f5b4818a-kube-api-access-pzgkx\") pod \"dns-operator-744455d44c-xwq84\" (UID: \"f2edf464-7a14-4e97-87d9-1414f5b4818a\") " pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.084626 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb767\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-kube-api-access-nb767\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.119466 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrbk\" (UniqueName: \"kubernetes.io/projected/aeef1cbf-eef5-48f8-b111-6b7244d686d4-kube-api-access-smrbk\") pod \"route-controller-manager-6576b87f9c-jq7th\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.119864 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.123079 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.123566 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.623550823 +0000 UTC m=+143.206242237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.140011 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbtbg\" (UniqueName: \"kubernetes.io/projected/63c2d864-8067-41ec-88ce-b7d7b727d8f2-kube-api-access-jbtbg\") pod \"authentication-operator-69f744f599-pqf2v\" (UID: \"63c2d864-8067-41ec-88ce-b7d7b727d8f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.152629 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ksmk\" (UniqueName: \"kubernetes.io/projected/3aa95c63-1461-4ba3-87bf-948b24cb0e32-kube-api-access-9ksmk\") pod \"machine-approver-56656f9798-gtcf2\" (UID: \"3aa95c63-1461-4ba3-87bf-948b24cb0e32\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.163088 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.166484 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.171240 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0135232-d15a-483f-9c5d-0047001f554a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.199344 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qcvzs"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.199931 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.204471 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkqjz\" (UniqueName: \"kubernetes.io/projected/b0135232-d15a-483f-9c5d-0047001f554a-kube-api-access-jkqjz\") pod \"ingress-operator-5b745b69d9-nj44h\" (UID: \"b0135232-d15a-483f-9c5d-0047001f554a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.210133 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.212177 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5601e187-f5b1-43fb-9022-a8287b8d5488-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vnkkx\" (UID: \"5601e187-f5b1-43fb-9022-a8287b8d5488\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.221787 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.228567 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.229280 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.729264795 +0000 UTC m=+143.311956199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.230277 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js7rw\" (UniqueName: \"kubernetes.io/projected/169884b0-3648-4ee3-a113-2ad95475c8e4-kube-api-access-js7rw\") pod \"package-server-manager-789f6589d5-q9w6x\" (UID: \"169884b0-3648-4ee3-a113-2ad95475c8e4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.262772 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.266601 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw9sr\" (UniqueName: \"kubernetes.io/projected/c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846-kube-api-access-qw9sr\") pod \"service-ca-operator-777779d784-8v95g\" (UID: \"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.285332 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txs7l\" (UniqueName: \"kubernetes.io/projected/f3285c9c-4471-4988-94e9-8ad1f65c7649-kube-api-access-txs7l\") pod \"packageserver-d55dfcdfc-ddbts\" (UID: \"f3285c9c-4471-4988-94e9-8ad1f65c7649\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.287825 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.296349 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d5mg\" (UniqueName: \"kubernetes.io/projected/1d3b0b03-3b43-4694-8d79-7a615a4f3dbc-kube-api-access-5d5mg\") pod \"ingress-canary-h5dtj\" (UID: \"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc\") " pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.300200 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-46ccx"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.306529 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.307226 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzmrn\" (UniqueName: \"kubernetes.io/projected/3892d90a-e1d2-42b3-82c4-efea782d8ba1-kube-api-access-lzmrn\") pod \"etcd-operator-b45778765-4wqw4\" (UID: \"3892d90a-e1d2-42b3-82c4-efea782d8ba1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.325962 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnnkz\" (UniqueName: \"kubernetes.io/projected/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-kube-api-access-vnnkz\") pod \"marketplace-operator-79b997595-kf6f5\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.331797 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.332209 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.832193551 +0000 UTC m=+143.414884965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.339802 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.346391 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.355427 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rj98\" (UniqueName: \"kubernetes.io/projected/2af55037-7ec8-4f97-96da-26ceb9c3a88f-kube-api-access-5rj98\") pod \"machine-config-controller-84d6567774-hzwlw\" (UID: \"2af55037-7ec8-4f97-96da-26ceb9c3a88f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.358942 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.366502 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.370193 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89cmf\" (UniqueName: \"kubernetes.io/projected/56635a7e-1505-4486-978c-9aaf84e13929-kube-api-access-89cmf\") pod \"kube-storage-version-migrator-operator-b67b599dd-kcltv\" (UID: \"56635a7e-1505-4486-978c-9aaf84e13929\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.371930 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.396368 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-httg4\" (UniqueName: \"kubernetes.io/projected/4955a26c-c5ff-42cb-a72f-b37e3bfded04-kube-api-access-httg4\") pod \"router-default-5444994796-7m4zk\" (UID: \"4955a26c-c5ff-42cb-a72f-b37e3bfded04\") " pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.397056 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.413314 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.418134 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b69br\" (UniqueName: \"kubernetes.io/projected/a2e5c4f4-ca10-481c-803f-d49e6eb90295-kube-api-access-b69br\") pod \"migrator-59844c95c7-cljr9\" (UID: \"a2e5c4f4-ca10-481c-803f-d49e6eb90295\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.419564 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.428186 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbkq\" (UniqueName: \"kubernetes.io/projected/cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec-kube-api-access-mhbkq\") pod \"service-ca-9c57cc56f-48f7z\" (UID: \"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.432989 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.433284 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:52.933273083 +0000 UTC m=+143.515964497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.433575 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.450287 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.456465 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.458365 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858rs\" (UniqueName: \"kubernetes.io/projected/3c1bfc9a-db7c-49a5-acd6-05ad2a616cae-kube-api-access-858rs\") pod \"control-plane-machine-set-operator-78cbb6b69f-zvqv2\" (UID: \"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.471222 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.483815 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrgcq\" (UniqueName: \"kubernetes.io/projected/e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b-kube-api-access-jrgcq\") pod \"multus-admission-controller-857f4d67dd-rl5n4\" (UID: \"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.484903 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd9jn\" (UniqueName: \"kubernetes.io/projected/0002faa0-2d17-4c16-8634-913b9012788e-kube-api-access-rd9jn\") pod \"csi-hostpathplugin-s695g\" (UID: \"0002faa0-2d17-4c16-8634-913b9012788e\") " pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.497384 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.508754 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h78f\" (UniqueName: \"kubernetes.io/projected/71180cf9-1370-4a11-9169-d7a121517ca4-kube-api-access-2h78f\") pod \"olm-operator-6b444d44fb-rfkk2\" (UID: \"71180cf9-1370-4a11-9169-d7a121517ca4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.510822 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h5dtj" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.525372 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86sn\" (UniqueName: \"kubernetes.io/projected/36f98287-87b6-41a4-b88b-0dfe63b17838-kube-api-access-d86sn\") pod \"machine-config-server-mwrpx\" (UID: \"36f98287-87b6-41a4-b88b-0dfe63b17838\") " pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.534547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.535013 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.0349934 +0000 UTC m=+143.617684814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.545519 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s695g" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.548808 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwf7d\" (UniqueName: \"kubernetes.io/projected/f57116dc-68fe-4b40-8b1e-71b7d85153d0-kube-api-access-qwf7d\") pod \"catalog-operator-68c6474976-frjwx\" (UID: \"f57116dc-68fe-4b40-8b1e-71b7d85153d0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.588158 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.588774 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbzwt\" (UniqueName: \"kubernetes.io/projected/212974c1-d4ce-40be-bc86-10539c144803-kube-api-access-nbzwt\") pod \"dns-default-xwb69\" (UID: \"212974c1-d4ce-40be-bc86-10539c144803\") " pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.592425 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxf9c\" (UniqueName: \"kubernetes.io/projected/54e58fb8-e191-4fc5-ba7c-b7f401b15e00-kube-api-access-bxf9c\") pod \"machine-config-operator-74547568cd-nrtrj\" (UID: \"54e58fb8-e191-4fc5-ba7c-b7f401b15e00\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.610347 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdzng\" (UniqueName: \"kubernetes.io/projected/28287dc3-2b46-498f-9972-5a861374f4d5-kube-api-access-sdzng\") pod \"collect-profiles-29501640-r5tqt\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.635676 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.635988 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.13597721 +0000 UTC m=+143.718668624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.652955 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.681855 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.685847 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.700520 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.704605 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rlbdg"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.706861 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.712474 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f7wnn"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.713796 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.724893 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sz2ft"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.729219 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xwb69" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.737735 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.738154 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.238139057 +0000 UTC m=+143.820830471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.740182 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.743487 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bk69"] Feb 03 06:02:52 crc kubenswrapper[4872]: W0203 06:02:52.752961 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8492fe4a_5d99_4575_8d12_2ad50ecd6d07.slice/crio-fe90a8b2a650b1be9a7697bf93ddbbb781f85f36ab7f96c039cfeae52e5246c9 WatchSource:0}: Error finding container fe90a8b2a650b1be9a7697bf93ddbbb781f85f36ab7f96c039cfeae52e5246c9: Status 404 returned error can't find the container with id fe90a8b2a650b1be9a7697bf93ddbbb781f85f36ab7f96c039cfeae52e5246c9 Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.783555 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.797757 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fz7px"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.824738 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mwrpx" Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.838763 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.839118 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.339107016 +0000 UTC m=+143.921798430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.848854 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th"] Feb 03 06:02:52 crc kubenswrapper[4872]: W0203 06:02:52.853324 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69e367ca_9517_4daf_bae1_886b4f854b76.slice/crio-336c8cac3b18e79fe81d524ba77e935f54ce795daf203177f2babb634ebcf5c3 WatchSource:0}: Error finding container 336c8cac3b18e79fe81d524ba77e935f54ce795daf203177f2babb634ebcf5c3: Status 404 returned error can't find the container with id 336c8cac3b18e79fe81d524ba77e935f54ce795daf203177f2babb634ebcf5c3 Feb 03 06:02:52 crc kubenswrapper[4872]: W0203 06:02:52.874602 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0180e076_5e8c_4190_bd67_569e2f915913.slice/crio-b59f668d6b3e1ca3b5bf498ed33fac868ebd04e1d4770219c4bf2f33b5134be1 WatchSource:0}: Error finding container b59f668d6b3e1ca3b5bf498ed33fac868ebd04e1d4770219c4bf2f33b5134be1: Status 404 returned error can't find the container with id b59f668d6b3e1ca3b5bf498ed33fac868ebd04e1d4770219c4bf2f33b5134be1 Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.877038 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.878421 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v5phf"] Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.883978 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pqf2v"] Feb 03 06:02:52 crc kubenswrapper[4872]: W0203 06:02:52.919636 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc8ee41a_ef50_4114_b75a_75f65fe070c9.slice/crio-4371a01a90eeae0b4503cfcbffb4b7ac5e4782ec5b362d08b41e86ff4d7e0a44 WatchSource:0}: Error finding container 4371a01a90eeae0b4503cfcbffb4b7ac5e4782ec5b362d08b41e86ff4d7e0a44: Status 404 returned error can't find the container with id 4371a01a90eeae0b4503cfcbffb4b7ac5e4782ec5b362d08b41e86ff4d7e0a44 Feb 03 06:02:52 crc kubenswrapper[4872]: I0203 06:02:52.943013 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:52 crc kubenswrapper[4872]: E0203 06:02:52.943311 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 06:02:53.443294912 +0000 UTC m=+144.025986326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.044728 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.045134 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.545119392 +0000 UTC m=+144.127810806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.068025 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" event={"ID":"bc8ee41a-ef50-4114-b75a-75f65fe070c9","Type":"ContainerStarted","Data":"4371a01a90eeae0b4503cfcbffb4b7ac5e4782ec5b362d08b41e86ff4d7e0a44"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.086079 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-48f7z"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.100621 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" event={"ID":"8492fe4a-5d99-4575-8d12-2ad50ecd6d07","Type":"ContainerStarted","Data":"fe90a8b2a650b1be9a7697bf93ddbbb781f85f36ab7f96c039cfeae52e5246c9"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.101959 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" event={"ID":"f72f9246-edd6-47f2-8661-eacb3bdcb165","Type":"ContainerStarted","Data":"258abf66e87b35aed24eba48b2febc4dc5dad6a92c6a85955c5ab50f1cb282ea"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.102973 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rlbdg" event={"ID":"0180e076-5e8c-4190-bd67-569e2f915913","Type":"ContainerStarted","Data":"b59f668d6b3e1ca3b5bf498ed33fac868ebd04e1d4770219c4bf2f33b5134be1"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.120275 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" event={"ID":"69e367ca-9517-4daf-bae1-886b4f854b76","Type":"ContainerStarted","Data":"336c8cac3b18e79fe81d524ba77e935f54ce795daf203177f2babb634ebcf5c3"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.121557 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xwq84"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.122882 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4wqw4"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.131475 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" event={"ID":"7c4d0117-eecb-460f-997f-7fcb92cabdef","Type":"ContainerStarted","Data":"5493c8726bd49c1fc9a7d27b72c65dddc7b260c721688e21be96a569cd431e72"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.145603 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.146636 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.646605254 +0000 UTC m=+144.229296708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.150424 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7m4zk" event={"ID":"4955a26c-c5ff-42cb-a72f-b37e3bfded04","Type":"ContainerStarted","Data":"f40bcb29951e62763788a057ae24016dfd87ea1a426dbe969b2edf2f86fe9fc9"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.165031 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7wnn" event={"ID":"e0b641a4-8a66-4550-961f-c273bd9940e0","Type":"ContainerStarted","Data":"4bf8ff32fba5f8d26047e38a7ca1525e16010c45890b7abd332f3d56e7d04637"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.173115 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" event={"ID":"7f8ddd98-412e-4a11-9cc2-07595b9cfdba","Type":"ContainerStarted","Data":"ccb4911ef3d18f52922dcd278f1f3cadaa60ee38a9bd9cfa2bb068b9c22c21ea"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.180500 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.181284 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" event={"ID":"cac75736-e160-4979-b86d-1232fcbb9387","Type":"ContainerStarted","Data":"e6c2f20841dd23ddb1cb23e555e4f9e282577f2b95a677e4cfb983389e1d28e1"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.248280 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.248448 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" event={"ID":"a611c711-1f25-4e6e-983c-17c001aaeabd","Type":"ContainerStarted","Data":"00b02cc2f86655d54eca211315a33d3b72324a7b773758102af97a1a61a4aaf8"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.248499 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" event={"ID":"a611c711-1f25-4e6e-983c-17c001aaeabd","Type":"ContainerStarted","Data":"a49c6deed137d424241a256ac15daa61e779a56eb3b174c2fc9796c4832352eb"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.253974 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.254238 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.754227741 +0000 UTC m=+144.336919155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.283267 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h5dtj"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.287818 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" event={"ID":"c9ac5410-335f-4e33-88c7-4b7af39718ab","Type":"ContainerStarted","Data":"135c8e1f07a59f4ea1de487466acac23cf776cce8b7bd66b532d6af59ddd53bb"} Feb 03 06:02:53 crc kubenswrapper[4872]: W0203 06:02:53.297155 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f98287_87b6_41a4_b88b_0dfe63b17838.slice/crio-ff9c6f68ba4eb3f38b7d0bb2e74e6f613d2bc38e626e5f8b4d3cd37e5559a117 WatchSource:0}: Error finding container ff9c6f68ba4eb3f38b7d0bb2e74e6f613d2bc38e626e5f8b4d3cd37e5559a117: Status 404 returned error can't find the container with id ff9c6f68ba4eb3f38b7d0bb2e74e6f613d2bc38e626e5f8b4d3cd37e5559a117 Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.331612 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" event={"ID":"d546f30f-2c86-4925-8566-00e50b7875c7","Type":"ContainerStarted","Data":"55ec49ee7537e71d9dbd2bc430d49774e7c0190e97427f7a832f30fbc5b8f0de"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.333217 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" event={"ID":"3aa95c63-1461-4ba3-87bf-948b24cb0e32","Type":"ContainerStarted","Data":"f6b42318b1ef54618a726477f3607b86554f7ffa498fecf5a465da8999cac079"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.344564 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" event={"ID":"aeef1cbf-eef5-48f8-b111-6b7244d686d4","Type":"ContainerStarted","Data":"d48315956ea3c2acf57f795434e4e453f1fcff7cd714c8fbfbe61f309acde265"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.346968 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" event={"ID":"ee90c7f2-73c6-4d64-a164-39d0dbe68a21","Type":"ContainerStarted","Data":"3beb6c8eac173b3a95d3be6258fe05f63245447e60385e3227f13eba93163f18"} Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.347296 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h"] Feb 03 06:02:53 crc kubenswrapper[4872]: W0203 06:02:53.348098 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2edf464_7a14_4e97_87d9_1414f5b4818a.slice/crio-1cd319a833815225cb272e2bad5600a6fb11b43a1b0a14a6e210e498a244e3e5 WatchSource:0}: Error finding container 1cd319a833815225cb272e2bad5600a6fb11b43a1b0a14a6e210e498a244e3e5: Status 404 returned error can't find the container with id 
1cd319a833815225cb272e2bad5600a6fb11b43a1b0a14a6e210e498a244e3e5 Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.355829 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.355985 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.855965269 +0000 UTC m=+144.438656673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.356637 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.357043 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.857005174 +0000 UTC m=+144.439696588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: W0203 06:02:53.398508 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3b0b03_3b43_4694_8d79_7a615a4f3dbc.slice/crio-70425a811709d777b164fa389eb29793a86292c72d0797c9ef8c5ce3fcc105b2 WatchSource:0}: Error finding container 70425a811709d777b164fa389eb29793a86292c72d0797c9ef8c5ce3fcc105b2: Status 404 returned error can't find the container with id 70425a811709d777b164fa389eb29793a86292c72d0797c9ef8c5ce3fcc105b2 Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.422886 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8v95g"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.457662 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.458416 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:53.958401633 +0000 UTC m=+144.541093047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.561400 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.561704 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.061679198 +0000 UTC m=+144.644370612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.586580 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.627840 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kf6f5"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.667786 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.668046 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.168031066 +0000 UTC m=+144.750722480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.686864 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.689362 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s695g"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.770889 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.771132 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.271121106 +0000 UTC m=+144.853812520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.774067 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rl5n4"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.778827 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.796726 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.816305 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.871524 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.871622 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.371606932 +0000 UTC m=+144.954298346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.871841 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.872073 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.372066254 +0000 UTC m=+144.954757668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.933063 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.962769 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9"] Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.972774 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:53 crc kubenswrapper[4872]: E0203 06:02:53.973065 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.473050893 +0000 UTC m=+145.055742297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:53 crc kubenswrapper[4872]: I0203 06:02:53.985511 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx"] Feb 03 06:02:53 crc kubenswrapper[4872]: W0203 06:02:53.995148 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54e58fb8_e191_4fc5_ba7c_b7f401b15e00.slice/crio-c3025cb4219944cb627017ed7998f91831dc5efb64e02a7a5beb94bc9a99ac33 WatchSource:0}: Error finding container c3025cb4219944cb627017ed7998f91831dc5efb64e02a7a5beb94bc9a99ac33: Status 404 returned error can't find the container with id c3025cb4219944cb627017ed7998f91831dc5efb64e02a7a5beb94bc9a99ac33 Feb 03 06:02:53 crc kubenswrapper[4872]: W0203 06:02:53.997249 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod169884b0_3648_4ee3_a113_2ad95475c8e4.slice/crio-9d04abc8ad6308b2f2ae4cbcf32311a7c48825589f9936e9aaff5823c81f536f WatchSource:0}: Error finding container 9d04abc8ad6308b2f2ae4cbcf32311a7c48825589f9936e9aaff5823c81f536f: Status 404 returned error can't find the container with id 9d04abc8ad6308b2f2ae4cbcf32311a7c48825589f9936e9aaff5823c81f536f Feb 03 06:02:54 crc kubenswrapper[4872]: W0203 06:02:54.009943 4872 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28287dc3_2b46_498f_9972_5a861374f4d5.slice/crio-a139bf8ee1f5d7c87bc467809f460abf23c45947c0db7371270446a5073630f4 WatchSource:0}: Error finding container a139bf8ee1f5d7c87bc467809f460abf23c45947c0db7371270446a5073630f4: Status 404 returned error can't find the container with id a139bf8ee1f5d7c87bc467809f460abf23c45947c0db7371270446a5073630f4 Feb 03 06:02:54 crc kubenswrapper[4872]: W0203 06:02:54.043639 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9f3f67f_ac6f_4037_949a_0ad5bc1dbd3b.slice/crio-aaf4f9fd66ef0596fafdc9209db32787a9ed7f10b47dd476552df24288b05c81 WatchSource:0}: Error finding container aaf4f9fd66ef0596fafdc9209db32787a9ed7f10b47dd476552df24288b05c81: Status 404 returned error can't find the container with id aaf4f9fd66ef0596fafdc9209db32787a9ed7f10b47dd476552df24288b05c81 Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.074649 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.075184 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.57517389 +0000 UTC m=+145.157865304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: W0203 06:02:54.075543 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71180cf9_1370_4a11_9169_d7a121517ca4.slice/crio-c4a1b9422f1de1d9763d3fcd72318e2012977c133437c5b5849dbd66af59cf92 WatchSource:0}: Error finding container c4a1b9422f1de1d9763d3fcd72318e2012977c133437c5b5849dbd66af59cf92: Status 404 returned error can't find the container with id c4a1b9422f1de1d9763d3fcd72318e2012977c133437c5b5849dbd66af59cf92 Feb 03 06:02:54 crc kubenswrapper[4872]: W0203 06:02:54.083361 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf57116dc_68fe_4b40_8b1e_71b7d85153d0.slice/crio-c0dbef5e6741e96347e9a8f614727770352a2bc304ed9e8b2171d0b9030dc98b WatchSource:0}: Error finding container c0dbef5e6741e96347e9a8f614727770352a2bc304ed9e8b2171d0b9030dc98b: Status 404 returned error can't find the container with id c0dbef5e6741e96347e9a8f614727770352a2bc304ed9e8b2171d0b9030dc98b Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.177245 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.177528 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.677514642 +0000 UTC m=+145.260206056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.247907 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xwb69"] Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.263789 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv"] Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.278586 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.278922 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.778907431 +0000 UTC m=+145.361598845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: W0203 06:02:54.324415 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212974c1_d4ce_40be_bc86_10539c144803.slice/crio-b5db88459bb9dfa234c6da3b7e9611a7d62446ddad25e2ea832d087c97c9d2a2 WatchSource:0}: Error finding container b5db88459bb9dfa234c6da3b7e9611a7d62446ddad25e2ea832d087c97c9d2a2: Status 404 returned error can't find the container with id b5db88459bb9dfa234c6da3b7e9611a7d62446ddad25e2ea832d087c97c9d2a2 Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.375513 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h5dtj" event={"ID":"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc","Type":"ContainerStarted","Data":"70425a811709d777b164fa389eb29793a86292c72d0797c9ef8c5ce3fcc105b2"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.379072 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.379413 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.879396339 +0000 UTC m=+145.462087753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.383808 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" event={"ID":"f2edf464-7a14-4e97-87d9-1414f5b4818a","Type":"ContainerStarted","Data":"1cd319a833815225cb272e2bad5600a6fb11b43a1b0a14a6e210e498a244e3e5"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.386665 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" event={"ID":"54e58fb8-e191-4fc5-ba7c-b7f401b15e00","Type":"ContainerStarted","Data":"c3025cb4219944cb627017ed7998f91831dc5efb64e02a7a5beb94bc9a99ac33"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.388671 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" event={"ID":"3892d90a-e1d2-42b3-82c4-efea782d8ba1","Type":"ContainerStarted","Data":"2e428e167ff5a2e152451277493e02145b53fd3caaa0aa52375a2ae174d51730"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.409455 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" event={"ID":"3aa95c63-1461-4ba3-87bf-948b24cb0e32","Type":"ContainerStarted","Data":"93118810048a3f4009e917da6934d0c2c6577ce051b68a4670cb5575aa95462a"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.411385 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" event={"ID":"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed","Type":"ContainerStarted","Data":"c38a9a2b964f6ca9bf835292730c0facb4289bd03f7a3c14d4bfd0625e078dcc"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.412817 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" event={"ID":"f57116dc-68fe-4b40-8b1e-71b7d85153d0","Type":"ContainerStarted","Data":"c0dbef5e6741e96347e9a8f614727770352a2bc304ed9e8b2171d0b9030dc98b"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.422122 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xwb69" event={"ID":"212974c1-d4ce-40be-bc86-10539c144803","Type":"ContainerStarted","Data":"b5db88459bb9dfa234c6da3b7e9611a7d62446ddad25e2ea832d087c97c9d2a2"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.425887 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" event={"ID":"71180cf9-1370-4a11-9169-d7a121517ca4","Type":"ContainerStarted","Data":"c4a1b9422f1de1d9763d3fcd72318e2012977c133437c5b5849dbd66af59cf92"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.427048 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" event={"ID":"9b57c265-ae84-4b43-b90f-c734a20b43b4","Type":"ContainerStarted","Data":"b96479022ad929aa07b1e5f5ffc621c2934ef286272c0be7b5d8b4e5c6231712"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 
06:02:54.429982 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" event={"ID":"c9ac5410-335f-4e33-88c7-4b7af39718ab","Type":"ContainerStarted","Data":"b796550d6873e2998fade739b7496c5e53acc93a160a09d1373c5f03d00986d7"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.430009 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" event={"ID":"c9ac5410-335f-4e33-88c7-4b7af39718ab","Type":"ContainerStarted","Data":"9d68f5e2f478316e7c3aab005b2a2211ca4a76ff457055b5489621d3ca449e56"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.438043 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mwrpx" event={"ID":"36f98287-87b6-41a4-b88b-0dfe63b17838","Type":"ContainerStarted","Data":"ff9c6f68ba4eb3f38b7d0bb2e74e6f613d2bc38e626e5f8b4d3cd37e5559a117"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.448177 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" event={"ID":"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec","Type":"ContainerStarted","Data":"3af030d6d95cc18729a3ac754e458ad52ac7dc365de49c1df8d8e2aee4fd8365"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.450738 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" event={"ID":"f3285c9c-4471-4988-94e9-8ad1f65c7649","Type":"ContainerStarted","Data":"c6ae305ebdc4ee86c78490671f31b3fea9c3307a20722f21fe5094fe0be49276"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.451825 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" event={"ID":"a2e5c4f4-ca10-481c-803f-d49e6eb90295","Type":"ContainerStarted","Data":"1cdc85637e1a54ab0056a11e5517711f8e131edf13b03bbf4002f5239dc6b425"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.452908 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" event={"ID":"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b","Type":"ContainerStarted","Data":"aaf4f9fd66ef0596fafdc9209db32787a9ed7f10b47dd476552df24288b05c81"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.453833 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" event={"ID":"28287dc3-2b46-498f-9972-5a861374f4d5","Type":"ContainerStarted","Data":"a139bf8ee1f5d7c87bc467809f460abf23c45947c0db7371270446a5073630f4"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.454950 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rlbdg" event={"ID":"0180e076-5e8c-4190-bd67-569e2f915913","Type":"ContainerStarted","Data":"fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.470435 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7wnn" event={"ID":"e0b641a4-8a66-4550-961f-c273bd9940e0","Type":"ContainerStarted","Data":"97cfbab8d5f296852164475aa7e0cec80bcc5c59c8f9444122aa19c147b93a64"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.472530 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-f7wnn" Feb 03 06:02:54 crc kubenswrapper[4872]: 
I0203 06:02:54.477874 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7wnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.477924 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7wnn" podUID="e0b641a4-8a66-4550-961f-c273bd9940e0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.478320 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" event={"ID":"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846","Type":"ContainerStarted","Data":"a606f20b5b764f9d2f8676d6d09772bb7c201ae1fa904d54c17d92de965c7a10"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.480255 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.480851 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:54.980836778 +0000 UTC m=+145.563528192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.481942 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" event={"ID":"69e367ca-9517-4daf-bae1-886b4f854b76","Type":"ContainerStarted","Data":"fbe2d73a718e4ed520a8c9e6de3915dab71549f74ead8f2dbb968378b6a9b3f2"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.484619 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" event={"ID":"a611c711-1f25-4e6e-983c-17c001aaeabd","Type":"ContainerStarted","Data":"201be398756349246f596db74958c32ce80eba8c0186cae29e80c7c6645a45a2"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.486407 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" event={"ID":"5601e187-f5b1-43fb-9022-a8287b8d5488","Type":"ContainerStarted","Data":"a0d25b990f949081af977f2583209f9a88dc7e30e7741c91f6748e5e0e784064"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.487703 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" event={"ID":"b0135232-d15a-483f-9c5d-0047001f554a","Type":"ContainerStarted","Data":"b21680fd5720865719a15f7589dfd9b734535e4bea1bfc073c93b27a367759c7"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.494828 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" event={"ID":"cac75736-e160-4979-b86d-1232fcbb9387","Type":"ContainerStarted","Data":"b9fb880bd83455d10d166fe604ec9c3249ba5418eba4a324be811475dd1a1e3e"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.504949 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" event={"ID":"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae","Type":"ContainerStarted","Data":"e07dafe04c11362af78c256a05e374a1d66ccdc87390d841ff8ed398540a6140"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.512162 4872 generic.go:334] "Generic (PLEG): container finished" podID="d546f30f-2c86-4925-8566-00e50b7875c7" containerID="d1f81d13265cba21fe40b9e385ad47b95bd5f95c59b06b00ba3dc7e45dcb704a" exitCode=0 Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.512203 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" event={"ID":"d546f30f-2c86-4925-8566-00e50b7875c7","Type":"ContainerDied","Data":"d1f81d13265cba21fe40b9e385ad47b95bd5f95c59b06b00ba3dc7e45dcb704a"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.514751 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" event={"ID":"f72f9246-edd6-47f2-8661-eacb3bdcb165","Type":"ContainerStarted","Data":"3d70d52752ffc908946da1133bc381cd6236ed1ef908013895e4dbce0630dba0"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.514910 4872 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.516226 4872 patch_prober.go:28] interesting pod/console-operator-58897d9998-sz2ft container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.516257 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" podUID="f72f9246-edd6-47f2-8661-eacb3bdcb165" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.516579 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" event={"ID":"2af55037-7ec8-4f97-96da-26ceb9c3a88f","Type":"ContainerStarted","Data":"b24d5223f726aeabe7d7ad42c74551a599de0c029c8a67770727ad454eb3fd6d"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.516607 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" event={"ID":"2af55037-7ec8-4f97-96da-26ceb9c3a88f","Type":"ContainerStarted","Data":"b9dc808ceeaf64d7f30c0c7157da2acc7e9828f442b673a90a3b0d2c4b1c347a"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.520810 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" event={"ID":"63c2d864-8067-41ec-88ce-b7d7b727d8f2","Type":"ContainerStarted","Data":"19a475018fb6854275bd196efacd48040add1329b2063e971a216b5555436f69"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.520835 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" event={"ID":"63c2d864-8067-41ec-88ce-b7d7b727d8f2","Type":"ContainerStarted","Data":"8404bf907020a7c0dbe88bcd3ff6d38c0574d88545ccd82786d7e6eccbe2fd04"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.522348 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" event={"ID":"7c4d0117-eecb-460f-997f-7fcb92cabdef","Type":"ContainerStarted","Data":"fb0b63b4b3fcd670ba7fc8cd94b9f2bf6ca13e7bd2127941e34cc68504240dfd"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.523743 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s695g" event={"ID":"0002faa0-2d17-4c16-8634-913b9012788e","Type":"ContainerStarted","Data":"5065c1e29cc5941522a84b82f72c1dbe1af9ac748d8c1d3b1e0a049cda18809a"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.529407 4872 generic.go:334] "Generic (PLEG): container finished" podID="ee90c7f2-73c6-4d64-a164-39d0dbe68a21" containerID="f731c66486acc879cd1d3688d9a35a9c9b49b2a389d50fb746c880d4d7114002" exitCode=0 Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.529449 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" event={"ID":"ee90c7f2-73c6-4d64-a164-39d0dbe68a21","Type":"ContainerDied","Data":"f731c66486acc879cd1d3688d9a35a9c9b49b2a389d50fb746c880d4d7114002"} Feb 03 06:02:54 crc kubenswrapper[4872]: 
I0203 06:02:54.530950 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" event={"ID":"169884b0-3648-4ee3-a113-2ad95475c8e4","Type":"ContainerStarted","Data":"9d04abc8ad6308b2f2ae4cbcf32311a7c48825589f9936e9aaff5823c81f536f"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.532771 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" event={"ID":"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538","Type":"ContainerStarted","Data":"566cda45bcf5089d99474ad2f61c6f76b164521b7a53262152559dec2e248544"} Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.583268 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.583922 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.083883748 +0000 UTC m=+145.666575162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.584407 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.586620 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.086604883 +0000 UTC m=+145.669296387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.685024 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.685325 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.185310048 +0000 UTC m=+145.768001462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.703948 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-qcvzs" podStartSLOduration=119.703927684 podStartE2EDuration="1m59.703927684s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:54.703653338 +0000 UTC m=+145.286344762" watchObservedRunningTime="2026-02-03 06:02:54.703927684 +0000 UTC m=+145.286619098" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.745488 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pqf2v" podStartSLOduration=119.745472829 podStartE2EDuration="1m59.745472829s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:54.744107807 +0000 UTC m=+145.326799221" watchObservedRunningTime="2026-02-03 06:02:54.745472829 +0000 UTC m=+145.328164243" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.787503 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.787884 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.287872215 +0000 UTC m=+145.870563629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.822359 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-f7wnn" podStartSLOduration=119.82234007 podStartE2EDuration="1m59.82234007s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:54.822012273 +0000 UTC m=+145.404703687" watchObservedRunningTime="2026-02-03 06:02:54.82234007 +0000 UTC m=+145.405031484" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.824471 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" podStartSLOduration=119.824464572 podStartE2EDuration="1m59.824464572s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:54.782933876 +0000 UTC m=+145.365625300" watchObservedRunningTime="2026-02-03 06:02:54.824464572 +0000 UTC m=+145.407155986" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.888001 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2jhdv" podStartSLOduration=119.887985493 podStartE2EDuration="1m59.887985493s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:54.88575663 +0000 UTC m=+145.468448044" watchObservedRunningTime="2026-02-03 06:02:54.887985493 +0000 UTC m=+145.470676907" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.888793 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.889108 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.389092 +0000 UTC m=+145.971783414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.929924 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8f5j7" podStartSLOduration=119.929813155 podStartE2EDuration="1m59.929813155s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:54.923983836 +0000 UTC m=+145.506675250" watchObservedRunningTime="2026-02-03 06:02:54.929813155 +0000 UTC m=+145.512504569" Feb 03 06:02:54 crc kubenswrapper[4872]: I0203 06:02:54.990290 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:54 crc kubenswrapper[4872]: E0203 06:02:54.990658 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.490643913 +0000 UTC m=+146.073335327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.064114 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rlbdg" podStartSLOduration=120.064098893 podStartE2EDuration="2m0.064098893s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.063100889 +0000 UTC m=+145.645792303" watchObservedRunningTime="2026-02-03 06:02:55.064098893 +0000 UTC m=+145.646790307" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.065738 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b7blm" podStartSLOduration=120.065731132 podStartE2EDuration="2m0.065731132s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.000912999 +0000 UTC m=+145.583604413" watchObservedRunningTime="2026-02-03 06:02:55.065731132 +0000 UTC m=+145.648422546" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.091343 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.091467 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.591438338 +0000 UTC m=+146.174129752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.091572 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.091604 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-crpxk" podStartSLOduration=120.091588412 podStartE2EDuration="2m0.091588412s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.090210689 +0000 UTC m=+145.672902103" watchObservedRunningTime="2026-02-03 06:02:55.091588412 +0000 UTC m=+145.674279826" Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.091926 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.591914929 +0000 UTC m=+146.174606343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.193537 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.193659 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.693641527 +0000 UTC m=+146.276332941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.193776 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.194051 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.694044826 +0000 UTC m=+146.276736240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.300073 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.300466 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.800451105 +0000 UTC m=+146.383142519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.402885 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.403411 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:55.903401701 +0000 UTC m=+146.486093115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.503812 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.504234 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.004215107 +0000 UTC m=+146.586906521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.547006 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" event={"ID":"b0135232-d15a-483f-9c5d-0047001f554a","Type":"ContainerStarted","Data":"4e78548199293c03154cf86aec2417f398135eb144c542b86071c90957f08493"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.549922 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" event={"ID":"5601e187-f5b1-43fb-9022-a8287b8d5488","Type":"ContainerStarted","Data":"d9aacf741ed197a2cbf1650ef64bac5f346e8c7d8cb7ac43e36974a18e2471d0"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.551942 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7m4zk" event={"ID":"4955a26c-c5ff-42cb-a72f-b37e3bfded04","Type":"ContainerStarted","Data":"69a47a1c8933edf7a872b6ea0417a7c64161e4b38aae78ed0a7cf9040fb0e0a1"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.556188 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mwrpx" event={"ID":"36f98287-87b6-41a4-b88b-0dfe63b17838","Type":"ContainerStarted","Data":"2cb2f71330493dbc6676185f59fded221e35423133d75a82e77a74f375ccde64"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.571642 4872 generic.go:334] "Generic (PLEG): container finished" podID="bc8ee41a-ef50-4114-b75a-75f65fe070c9" containerID="1bad06fe1c613a8003eb183f08e50ec85801a8d0940d04a8450b96ab63e99bb7" exitCode=0 Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.571729 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" event={"ID":"bc8ee41a-ef50-4114-b75a-75f65fe070c9","Type":"ContainerDied","Data":"1bad06fe1c613a8003eb183f08e50ec85801a8d0940d04a8450b96ab63e99bb7"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.579591 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" event={"ID":"a2e5c4f4-ca10-481c-803f-d49e6eb90295","Type":"ContainerStarted","Data":"4119339f9235398e39cbed2cac44a7f1821777405c31c9b7c79cfc56732a8d81"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.583916 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vnkkx" podStartSLOduration=120.583902416 podStartE2EDuration="2m0.583902416s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.581773956 +0000 UTC m=+146.164465380" watchObservedRunningTime="2026-02-03 06:02:55.583902416 +0000 UTC m=+146.166593830" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.592103 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" event={"ID":"aeef1cbf-eef5-48f8-b111-6b7244d686d4","Type":"ContainerStarted","Data":"1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.592962 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.600419 4872 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jq7th container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.600458 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" podUID="aeef1cbf-eef5-48f8-b111-6b7244d686d4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.601524 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" event={"ID":"56635a7e-1505-4486-978c-9aaf84e13929","Type":"ContainerStarted","Data":"1baed31b6639ac463c94a1130db07d33a3ad902b9d5b239eca775958a3284b9d"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.605144 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.606321 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.106309263 +0000 UTC m=+146.689000677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.607755 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mwrpx" podStartSLOduration=6.605091724 podStartE2EDuration="6.605091724s" podCreationTimestamp="2026-02-03 06:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.601668632 +0000 UTC m=+146.184360046" watchObservedRunningTime="2026-02-03 06:02:55.605091724 +0000 UTC m=+146.187783138" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.624635 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" event={"ID":"3aa95c63-1461-4ba3-87bf-948b24cb0e32","Type":"ContainerStarted","Data":"91e921cdde485b9be6f29cb848f24ed35e8c1589bb23fa46c1ba81fca211bf51"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.663101 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" event={"ID":"c4a4f181-a8cf-4a9c-b7cf-ce5c1af85846","Type":"ContainerStarted","Data":"4a9967fca5cef1fcb45a5169bef0cfd821ded1563015352dfa3b78627ea4abc2"} Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.695864 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7m4zk" podStartSLOduration=120.695845998 podStartE2EDuration="2m0.695845998s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.643805361 +0000 UTC m=+146.226496775" watchObservedRunningTime="2026-02-03 06:02:55.695845998 +0000 UTC m=+146.278537412" Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.709084 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.710067 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.210051488 +0000 UTC m=+146.792742902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.717488 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" event={"ID":"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538","Type":"ContainerStarted","Data":"07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.717984 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.721884 4872 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-v5phf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body=
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.721921 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" podUID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.722369 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h5dtj" event={"ID":"1d3b0b03-3b43-4694-8d79-7a615a4f3dbc","Type":"ContainerStarted","Data":"c8df565ccc5912b62c406423fb243bd2df7536f4f8e755de050c7e939e627852"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.794250 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" event={"ID":"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed","Type":"ContainerStarted","Data":"77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.795195 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.798145 4872 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kf6f5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body=
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.798180 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.805508 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gtcf2" podStartSLOduration=120.805494576 podStartE2EDuration="2m0.805494576s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.803793624 +0000 UTC m=+146.386485038" watchObservedRunningTime="2026-02-03 06:02:55.805494576 +0000 UTC m=+146.388185990"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.817412 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.820009 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.319997473 +0000 UTC m=+146.902688887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.825007 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" event={"ID":"7f8ddd98-412e-4a11-9cc2-07595b9cfdba","Type":"ContainerStarted","Data":"571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.825859 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.827039 4872 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4bk69 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.827074 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" podUID="7f8ddd98-412e-4a11-9cc2-07595b9cfdba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.859873 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" event={"ID":"3892d90a-e1d2-42b3-82c4-efea782d8ba1","Type":"ContainerStarted","Data":"ae49bb157dda4016c6005492faf3c0d22fbc741db8b1d5e601be67b351093a56"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.883499 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" event={"ID":"f2edf464-7a14-4e97-87d9-1414f5b4818a","Type":"ContainerStarted","Data":"5cffb8909e244394b86fd7a455d941b51695c6e7fc27718f53ba248d79c6fb00"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.925573 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.926513 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.426497014 +0000 UTC m=+147.009188428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.926896 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:55 crc kubenswrapper[4872]: E0203 06:02:55.927564 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.42754921 +0000 UTC m=+147.010240624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.944535 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" event={"ID":"28287dc3-2b46-498f-9972-5a861374f4d5","Type":"ContainerStarted","Data":"4c63465b9759f99252eefa7aa3cf8b47dd0ea211173751d9a112c8299f41db3b"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.966314 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" event={"ID":"cc72bcdc-1e0b-4f9e-bc74-13ec703b15ec","Type":"ContainerStarted","Data":"a7235c3f651effd5ec099a209fa11c9e719c641c9daf5252c9c30121811a6970"}
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.982506 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts"
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.983343 4872 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ddbts container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Feb 03 06:02:55 crc kubenswrapper[4872]: I0203 06:02:55.983373 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" podUID="f3285c9c-4471-4988-94e9-8ad1f65c7649" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.019591 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8v95g" podStartSLOduration=121.019575294 podStartE2EDuration="2m1.019575294s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:55.993631783 +0000 UTC m=+146.576323197" watchObservedRunningTime="2026-02-03 06:02:56.019575294 +0000 UTC m=+146.602266708"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.021256 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" event={"ID":"54e58fb8-e191-4fc5-ba7c-b7f401b15e00","Type":"ContainerStarted","Data":"579116d473856658d7da36aeed90f35c34c0b7862304c18b06c6ab876d8a3684"}
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.032203 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.032999 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.532985065 +0000 UTC m=+147.115676479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.532985065 +0000 UTC m=+147.115676479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.046500 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h5dtj" podStartSLOduration=7.046481459 podStartE2EDuration="7.046481459s" podCreationTimestamp="2026-02-03 06:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.042864442 +0000 UTC m=+146.625555856" watchObservedRunningTime="2026-02-03 06:02:56.046481459 +0000 UTC m=+146.629172873" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.048194 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" podStartSLOduration=121.0481848 podStartE2EDuration="2m1.0481848s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.023637641 +0000 UTC m=+146.606329055" watchObservedRunningTime="2026-02-03 06:02:56.0481848 +0000 UTC m=+146.630876214" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.054614 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" event={"ID":"8492fe4a-5d99-4575-8d12-2ad50ecd6d07","Type":"ContainerStarted","Data":"ac7403623beff14809071c294bec6533530badea4f6792d7974a1a48b1955ad8"} Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.081744 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4wqw4" podStartSLOduration=121.081726643 podStartE2EDuration="2m1.081726643s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.080174106 +0000 UTC m=+146.662865530" watchObservedRunningTime="2026-02-03 06:02:56.081726643 +0000 UTC m=+146.664418057" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.092740 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" event={"ID":"9b57c265-ae84-4b43-b90f-c734a20b43b4","Type":"ContainerStarted","Data":"30fef627102b0ef509289c738dc9fd8c716f392ed5b1432ead5318aca2a764b9"} Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.103276 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" 
event={"ID":"3c1bfc9a-db7c-49a5-acd6-05ad2a616cae","Type":"ContainerStarted","Data":"054c10686f7aa14bd0bb9505b06130715306d3fbd8cc11cb771805699c2d4194"} Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.107602 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7wnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.107884 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7wnn" podUID="e0b641a4-8a66-4550-961f-c273bd9940e0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.107933 4872 patch_prober.go:28] interesting pod/console-operator-58897d9998-sz2ft container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.107946 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" podUID="f72f9246-edd6-47f2-8661-eacb3bdcb165" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.137500 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.139996 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.639963018 +0000 UTC m=+147.222654432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.157786 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" podStartSLOduration=121.157770075 podStartE2EDuration="2m1.157770075s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.101086007 +0000 UTC m=+146.683777421" watchObservedRunningTime="2026-02-03 06:02:56.157770075 +0000 UTC m=+146.740461489" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.158056 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" podStartSLOduration=121.158052882 podStartE2EDuration="2m1.158052882s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.156422533 +0000 UTC m=+146.739113957" watchObservedRunningTime="2026-02-03 06:02:56.158052882 +0000 UTC m=+146.740744296" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.221376 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" podStartSLOduration=121.221333468 podStartE2EDuration="2m1.221333468s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.21348057 +0000 UTC m=+146.796171984" watchObservedRunningTime="2026-02-03 06:02:56.221333468 +0000 UTC m=+146.804024882" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.238562 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.238829 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.738815607 +0000 UTC m=+147.321507021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.270003 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" podStartSLOduration=121.269988133 podStartE2EDuration="2m1.269988133s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.266747796 +0000 UTC m=+146.849439210" watchObservedRunningTime="2026-02-03 06:02:56.269988133 +0000 UTC m=+146.852679547" Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.339795 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.340326 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.840315089 +0000 UTC m=+147.423006493 (durationBeforeRetry 500ms). 
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.343710 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" podStartSLOduration=121.343677009 podStartE2EDuration="2m1.343677009s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.342616254 +0000 UTC m=+146.925307668" watchObservedRunningTime="2026-02-03 06:02:56.343677009 +0000 UTC m=+146.926368423"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.343953 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-48f7z" podStartSLOduration=121.343949356 podStartE2EDuration="2m1.343949356s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.309037139 +0000 UTC m=+146.891728563" watchObservedRunningTime="2026-02-03 06:02:56.343949356 +0000 UTC m=+146.926640770"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.413461 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7m4zk"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.420664 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dstl8" podStartSLOduration=121.420650093 podStartE2EDuration="2m1.420650093s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.371926406 +0000 UTC m=+146.954617820" watchObservedRunningTime="2026-02-03 06:02:56.420650093 +0000 UTC m=+147.003341507"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.420777 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zvqv2" podStartSLOduration=121.420773376 podStartE2EDuration="2m1.420773376s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.419839123 +0000 UTC m=+147.002530547" watchObservedRunningTime="2026-02-03 06:02:56.420773376 +0000 UTC m=+147.003464790"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.436823 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:02:56 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:02:56 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:02:56 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.436873 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.442696 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.442857 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.942825655 +0000 UTC m=+147.525517069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.442951 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.443259 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:56.943248015 +0000 UTC m=+147.525939429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.544335 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.544729 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.044713726 +0000 UTC m=+147.627405140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.646184 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.646758 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.14674429 +0000 UTC m=+147.729435704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.747772 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.748177 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.24816208 +0000 UTC m=+147.830853494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.849294 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.849599 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.34958801 +0000 UTC m=+147.932279414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.950841 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.951018 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.45099015 +0000 UTC m=+148.033681564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:56 crc kubenswrapper[4872]: I0203 06:02:56.951351 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:56 crc kubenswrapper[4872]: E0203 06:02:56.951782 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.451765778 +0000 UTC m=+148.034457192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.052505 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.052812 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.552797838 +0000 UTC m=+148.135489252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.108830 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" event={"ID":"169884b0-3648-4ee3-a113-2ad95475c8e4","Type":"ContainerStarted","Data":"3d50e4baa53278c1893902b210414e1a0ee7261469878943dcda4fc808cae2e7"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.108873 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" event={"ID":"169884b0-3648-4ee3-a113-2ad95475c8e4","Type":"ContainerStarted","Data":"93ab697aa1d803348d2606cfa48af10b9abc9706f096987823cbcf358ead8218"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.108928 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.110051 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" event={"ID":"f3285c9c-4471-4988-94e9-8ad1f65c7649","Type":"ContainerStarted","Data":"a1b85ca3aa3ac106d2fd6b684be9aa1c09bbdfce1528ca685991b5559990b833"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.110518 4872 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ddbts container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.110553 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" podUID="f3285c9c-4471-4988-94e9-8ad1f65c7649" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.111974 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" event={"ID":"bc8ee41a-ef50-4114-b75a-75f65fe070c9","Type":"ContainerStarted","Data":"375ff45b168085aee3b255f8fb34aeb35f1934c76aef976d08ea63944a7092e7"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.112018 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.113397 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" event={"ID":"b0135232-d15a-483f-9c5d-0047001f554a","Type":"ContainerStarted","Data":"ea4420c30f29ee631be8644d4fa1aa18591dfd3d3c11b0a556436f65c3ac567f"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.114633 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s695g" event={"ID":"0002faa0-2d17-4c16-8634-913b9012788e","Type":"ContainerStarted","Data":"f6877a80aff88f3ec60b4f20728a01bb0f300803e618b3caee8616be934051df"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.116063 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" event={"ID":"2af55037-7ec8-4f97-96da-26ceb9c3a88f","Type":"ContainerStarted","Data":"84c2a43506197b7ebcf5dcb81463114aa8c61f77ff12336459e2ce2589325eb8"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.117673 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" event={"ID":"ee90c7f2-73c6-4d64-a164-39d0dbe68a21","Type":"ContainerStarted","Data":"a1bb029c4fd49206e8c1ed259e025894c90ed3818fd82a0b8af0364e9d7b2ded"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.123046 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" event={"ID":"54e58fb8-e191-4fc5-ba7c-b7f401b15e00","Type":"ContainerStarted","Data":"51907140e37a71b69135f9e170148b4c3dac568c1e0eff16f9265c6506a56e4c"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.124896 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" event={"ID":"56635a7e-1505-4486-978c-9aaf84e13929","Type":"ContainerStarted","Data":"173e8b8152f0db915549af7503d7371f7368560e78f1d999780aef409a20a89e"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.127198 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" event={"ID":"d546f30f-2c86-4925-8566-00e50b7875c7","Type":"ContainerStarted","Data":"733edc80dba1f3258805a02dbed201c07c7853cdc17adeebfe0fb31e27a92623"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.127221 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" event={"ID":"d546f30f-2c86-4925-8566-00e50b7875c7","Type":"ContainerStarted","Data":"fc6fe0269e6cb98bd1c9ef1eb1ef77a06adc9f3c225320251ea81cc59c515142"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.132727 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" event={"ID":"71180cf9-1370-4a11-9169-d7a121517ca4","Type":"ContainerStarted","Data":"56b74e76c97edc8529c78e04479651ee5bb0a4d2d6ab6618fa198a25866bc8e3"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.132996 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.134559 4872 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rfkk2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body=
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.134614 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" podUID="71180cf9-1370-4a11-9169-d7a121517ca4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.137026 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" event={"ID":"a2e5c4f4-ca10-481c-803f-d49e6eb90295","Type":"ContainerStarted","Data":"5a5613667f18aa52fa50dc462695afdccda2846f5037bc12d034e9d37db2ad35"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.141716 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" event={"ID":"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b","Type":"ContainerStarted","Data":"374ad55de6dd1ce24dfa3a71bb2a07f8dc23e76d0267693d783d3196b98a9a5e"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.141744 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" event={"ID":"e9f3f67f-ac6f-4037-949a-0ad5bc1dbd3b","Type":"ContainerStarted","Data":"947cd615b833a121bd3ef0120a8ec6dab577c3d8113b3eb0e886cf8e073ee1cf"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.144581 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" event={"ID":"f57116dc-68fe-4b40-8b1e-71b7d85153d0","Type":"ContainerStarted","Data":"734fd7f3fc9ddf2949e8940010f6fea869d7577206f839b28cebf6fb2c6224fb"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.145150 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.145957 4872 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frjwx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.145989 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" podUID="f57116dc-68fe-4b40-8b1e-71b7d85153d0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.149756 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xwb69" event={"ID":"212974c1-d4ce-40be-bc86-10539c144803","Type":"ContainerStarted","Data":"8e616fa48a7c58b45796a9cbb5b4da72f2b6f8ca999597b25603bbf560f008e7"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.149779 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xwb69" event={"ID":"212974c1-d4ce-40be-bc86-10539c144803","Type":"ContainerStarted","Data":"e0f6ffc6a5c86af70906eccae6cc6430144ab60efae72aa0e43d46dec19696eb"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.150157 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xwb69"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.153956 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.158907 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.6588815 +0000 UTC m=+148.241572914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.163054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" event={"ID":"f2edf464-7a14-4e97-87d9-1414f5b4818a","Type":"ContainerStarted","Data":"96e37b2b27d361cded6e9d1ece2db6c14a5d5d5398fcca814f69190dcb435d12"}
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.163499 4872 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kf6f5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body=
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.163530 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.165281 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ftfxg" podStartSLOduration=122.165270573 podStartE2EDuration="2m2.165270573s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:56.448005808 +0000 UTC m=+147.030697222" watchObservedRunningTime="2026-02-03 06:02:57.165270573 +0000 UTC m=+147.747961987"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.166592 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" podStartSLOduration=122.166586115 podStartE2EDuration="2m2.166586115s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.165426767 +0000 UTC m=+147.748118181" watchObservedRunningTime="2026-02-03 06:02:57.166586115 +0000 UTC m=+147.749277519"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.166820 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7wnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.166851 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7wnn" podUID="e0b641a4-8a66-4550-961f-c273bd9940e0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.172998 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.211606 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cljr9" podStartSLOduration=122.211590153 podStartE2EDuration="2m2.211590153s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.20939011 +0000 UTC m=+147.792081524" watchObservedRunningTime="2026-02-03 06:02:57.211590153 +0000 UTC m=+147.794281567"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.218010 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.254436 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.257810 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.75779405 +0000 UTC m=+148.340485474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.279387 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nj44h" podStartSLOduration=122.279370306 podStartE2EDuration="2m2.279370306s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.275191106 +0000 UTC m=+147.857882520" watchObservedRunningTime="2026-02-03 06:02:57.279370306 +0000 UTC m=+147.862061720"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.314491 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" podStartSLOduration=122.314471947 podStartE2EDuration="2m2.314471947s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.313546816 +0000 UTC m=+147.896238230" watchObservedRunningTime="2026-02-03 06:02:57.314471947 +0000 UTC m=+147.897163361"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.360665 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.361050 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.861037784 +0000 UTC m=+148.443729198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.412771 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:02:57 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:02:57 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:02:57 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.412829 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.419434 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hzwlw" podStartSLOduration=122.419420962 podStartE2EDuration="2m2.419420962s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.4184647 +0000 UTC m=+148.001156114" watchObservedRunningTime="2026-02-03 06:02:57.419420962 +0000 UTC m=+148.002112376"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.420038 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kcltv" podStartSLOduration=122.420032617 podStartE2EDuration="2m2.420032617s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.383882991 +0000 UTC m=+147.966574405" watchObservedRunningTime="2026-02-03 06:02:57.420032617 +0000 UTC m=+148.002724031"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.461618 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.461790 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.461890 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.461914 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.461950 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.462568 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:57.962542205 +0000 UTC m=+148.545233619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.463797 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.470649 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.471338 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.497306 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.504974 4872 csr.go:261] certificate signing request csr-mrdpv is approved, waiting to be issued Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.552397 4872 csr.go:257] certificate signing request csr-mrdpv is issued Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.552811 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xwb69" podStartSLOduration=8.552799498 podStartE2EDuration="8.552799498s" podCreationTimestamp="2026-02-03 06:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.552220784 +0000 UTC m=+148.134912198" watchObservedRunningTime="2026-02-03 06:02:57.552799498 +0000 UTC m=+148.135490912" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.564439 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.564817 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.064804575 +0000 UTC m=+148.647495989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.646781 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.663721 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.665082 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.665248 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.165224221 +0000 UTC m=+148.747915635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.665359 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.665629 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.16562165 +0000 UTC m=+148.748313064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.680380 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.706548 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" podStartSLOduration=122.70653084 podStartE2EDuration="2m2.70653084s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.617695432 +0000 UTC m=+148.200386846" watchObservedRunningTime="2026-02-03 06:02:57.70653084 +0000 UTC m=+148.289222254" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.766998 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.767136 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.267111362 +0000 UTC m=+148.849802776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.767215 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.767468 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.26745657 +0000 UTC m=+148.850147984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.799347 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xwq84" podStartSLOduration=122.799330284 podStartE2EDuration="2m2.799330284s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.708230932 +0000 UTC m=+148.290922346" watchObservedRunningTime="2026-02-03 06:02:57.799330284 +0000 UTC m=+148.382021698" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.799583 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rl5n4" podStartSLOduration=122.79958037 podStartE2EDuration="2m2.79958037s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.7916768 +0000 UTC m=+148.374368214" watchObservedRunningTime="2026-02-03 06:02:57.79958037 +0000 UTC m=+148.382271784" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.868076 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.868334 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.368306327 +0000 UTC m=+148.950997741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.912219 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" podStartSLOduration=122.912202668 podStartE2EDuration="2m2.912202668s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.841868063 +0000 UTC m=+148.424559477" watchObservedRunningTime="2026-02-03 06:02:57.912202668 +0000 UTC m=+148.494894082" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.913070 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nrtrj" podStartSLOduration=122.913065429 podStartE2EDuration="2m2.913065429s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.91105657 +0000 UTC m=+148.493747984" watchObservedRunningTime="2026-02-03 06:02:57.913065429 +0000 UTC m=+148.495756833" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.938554 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" podStartSLOduration=122.938539239 podStartE2EDuration="2m2.938539239s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:57.931525471 +0000 UTC m=+148.514216885" watchObservedRunningTime="2026-02-03 06:02:57.938539239 +0000 UTC m=+148.521230643" Feb 03 06:02:57 crc kubenswrapper[4872]: I0203 06:02:57.969960 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:57 crc kubenswrapper[4872]: E0203 06:02:57.970275 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.470261049 +0000 UTC m=+149.052952463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.071885 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.072128 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.572101399 +0000 UTC m=+149.154792813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.072276 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.072598 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.572588471 +0000 UTC m=+149.155279885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.117622 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" podStartSLOduration=123.117603409 podStartE2EDuration="2m3.117603409s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:02:58.060730806 +0000 UTC m=+148.643422220" watchObservedRunningTime="2026-02-03 06:02:58.117603409 +0000 UTC m=+148.700294833" Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.169328 4872 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-v5phf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.169381 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" podUID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.175341 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.175621 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.675606629 +0000 UTC m=+149.258298043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.190098 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s695g" event={"ID":"0002faa0-2d17-4c16-8634-913b9012788e","Type":"ContainerStarted","Data":"5e5045ca49d69cd2f95b6781c3795035baad23afbf4861a03500518f87150337"} Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.192240 4872 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kf6f5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.192275 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.284384 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rfkk2" Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.284626 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frjwx" Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.285673 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.289341 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.789327804 +0000 UTC m=+149.372019218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.389037 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.389193 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.889172886 +0000 UTC m=+149.471864300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.389579 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.389881 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.889870812 +0000 UTC m=+149.472562226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.414823 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:02:58 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:02:58 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:02:58 crc kubenswrapper[4872]: healthz check failed Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.414926 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.499323 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.499793 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:58.999774525 +0000 UTC m=+149.582465939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.558774 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-03 05:57:57 +0000 UTC, rotation deadline is 2026-11-19 04:43:26.862059724 +0000 UTC Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.558808 4872 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6934h40m28.303254775s for next certificate rotation Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.600441 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.600799 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.100788025 +0000 UTC m=+149.683479439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.701096 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.701540 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.201524189 +0000 UTC m=+149.784215603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.802352 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.802656 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.302645211 +0000 UTC m=+149.885336615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: W0203 06:02:58.841272 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-eb111b2d7a401fcfe441ee0ae1f11f0a321ac16469eb44b76ee2eda51e96b3bc WatchSource:0}: Error finding container eb111b2d7a401fcfe441ee0ae1f11f0a321ac16469eb44b76ee2eda51e96b3bc: Status 404 returned error can't find the container with id eb111b2d7a401fcfe441ee0ae1f11f0a321ac16469eb44b76ee2eda51e96b3bc Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.903070 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.903497 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.403471837 +0000 UTC m=+149.986163251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:58 crc kubenswrapper[4872]: I0203 06:02:58.903893 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:58 crc kubenswrapper[4872]: E0203 06:02:58.904172 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.404160094 +0000 UTC m=+149.986851498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.004829 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.005154 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.505135073 +0000 UTC m=+150.087826487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.106032 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.106316 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.606305037 +0000 UTC m=+150.188996451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.193502 4872 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ddbts container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.193569 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" podUID="f3285c9c-4471-4988-94e9-8ad1f65c7649" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.194863 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d1f0c5bc684d48ef9e42dcc3ee2ca73f88d8a5062e58289b70c3fb82c11ec828"} Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.195131 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"eb111b2d7a401fcfe441ee0ae1f11f0a321ac16469eb44b76ee2eda51e96b3bc"} Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.206626 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.207008 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.706974519 +0000 UTC m=+150.289665933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: W0203 06:02:59.220489 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-953b079f02bb76db3d4675094fde2bd0ad9153315e3057587a02a5c3aa5a23e1 WatchSource:0}: Error finding container 953b079f02bb76db3d4675094fde2bd0ad9153315e3057587a02a5c3aa5a23e1: Status 404 returned error can't find the container with id 953b079f02bb76db3d4675094fde2bd0ad9153315e3057587a02a5c3aa5a23e1 Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.308024 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.309276 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.809262129 +0000 UTC m=+150.391953543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.405051 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:02:59 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:02:59 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:02:59 crc kubenswrapper[4872]: healthz check failed Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.405139 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.409437 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.409758 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:02:59.909737047 +0000 UTC m=+150.492428461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.511466 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.511821 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.011809432 +0000 UTC m=+150.594500846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.612332 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.612476 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.112448493 +0000 UTC m=+150.695139907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.612791 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.613101 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.113094638 +0000 UTC m=+150.695786052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.713934 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.714371 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.214332514 +0000 UTC m=+150.797023918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.815386 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.815915 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.315904277 +0000 UTC m=+150.898595681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.911413 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r8kmz"] Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.912317 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.920772 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:02:59 crc kubenswrapper[4872]: E0203 06:02:59.921152 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.421136398 +0000 UTC m=+151.003827812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:02:59 crc kubenswrapper[4872]: I0203 06:02:59.937626 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.022535 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-catalog-content\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.022581 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-utilities\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.022634 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvhk\" (UniqueName: \"kubernetes.io/projected/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-kube-api-access-kvvhk\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.022703 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.022943 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-03 06:03:00.522930948 +0000 UTC m=+151.105622362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.036250 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8kmz"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.050241 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8g49"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.051504 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.082551 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124184 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.124299 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.624278425 +0000 UTC m=+151.206969839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124371 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-catalog-content\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124403 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-catalog-content\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124430 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-utilities\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124470 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvhk\" (UniqueName: \"kubernetes.io/projected/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-kube-api-access-kvvhk\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124494 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6kw\" (UniqueName: \"kubernetes.io/projected/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-kube-api-access-cg6kw\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124529 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124548 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-utilities\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.124965 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-catalog-content\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.125032 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.625015193 +0000 UTC m=+151.207706607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.125121 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-utilities\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.141059 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.141558 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.160127 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8g49"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.209387 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.209608 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.228466 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.228813 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6kw\" (UniqueName: \"kubernetes.io/projected/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-kube-api-access-cg6kw\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.228843 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc 
kubenswrapper[4872]: I0203 06:03:00.228898 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-utilities\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.228931 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-catalog-content\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.228953 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.229093 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.729077426 +0000 UTC m=+151.311768830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.229720 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-utilities\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.229945 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-catalog-content\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.237008 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9cd459bacae1279a088b41c33581f3db5d9115fbc26e5b88de86f642dec4889d"} Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.237052 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"953b079f02bb76db3d4675094fde2bd0ad9153315e3057587a02a5c3aa5a23e1"} Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.250828 4872 generic.go:334] 
"Generic (PLEG): container finished" podID="28287dc3-2b46-498f-9972-5a861374f4d5" containerID="4c63465b9759f99252eefa7aa3cf8b47dd0ea211173751d9a112c8299f41db3b" exitCode=0 Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.250922 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" event={"ID":"28287dc3-2b46-498f-9972-5a861374f4d5","Type":"ContainerDied","Data":"4c63465b9759f99252eefa7aa3cf8b47dd0ea211173751d9a112c8299f41db3b"} Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.293352 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.295369 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvhk\" (UniqueName: \"kubernetes.io/projected/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-kube-api-access-kvvhk\") pod \"certified-operators-r8kmz\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.299370 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s695g" event={"ID":"0002faa0-2d17-4c16-8634-913b9012788e","Type":"ContainerStarted","Data":"644f9ef191b13db723ae153a392392587b3860c3997eb041a8684e9fad462f86"} Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.313530 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"90ea172385a118467ca20a4ba09fcf55c7cc088c736733f9e82e91759644ca67"} Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.313583 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a38c07a17b678bd007c9342d517791fb11d7351b0271c09c664ac412b53b9ca2"} Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.314220 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.329984 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.330038 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.330077 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 
06:03:00.331824 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.332207 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.832193167 +0000 UTC m=+151.414884571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.355726 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6kw\" (UniqueName: \"kubernetes.io/projected/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-kube-api-access-cg6kw\") pod \"community-operators-c8g49\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.380792 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prlzm"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.381973 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.382294 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.413968 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:03:00 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:03:00 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:03:00 crc kubenswrapper[4872]: healthz check failed Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.414018 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.425485 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prlzm"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.426105 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.431736 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.431957 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt9wg\" (UniqueName: \"kubernetes.io/projected/23276bc5-33eb-47db-8136-4576dbf6b945-kube-api-access-lt9wg\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.431983 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-catalog-content\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.432040 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-utilities\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.432613 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:00.932594452 +0000 UTC m=+151.515285866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.484396 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mrvtj"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.485236 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.488987 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.526997 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mrvtj"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.533260 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt9wg\" (UniqueName: \"kubernetes.io/projected/23276bc5-33eb-47db-8136-4576dbf6b945-kube-api-access-lt9wg\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.533296 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-utilities\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.533316 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-catalog-content\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.533348 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-catalog-content\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.533365 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwj9f\" (UniqueName: \"kubernetes.io/projected/fbc6a05e-2681-43a1-9c7a-5b20f879161a-kube-api-access-bwj9f\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.533391 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-utilities\") pod \"certified-operators-prlzm\" (UID: 
\"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.533431 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.533728 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.033718245 +0000 UTC m=+151.616409649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.535019 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-catalog-content\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.535230 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-utilities\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.540378 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.559707 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt9wg\" (UniqueName: \"kubernetes.io/projected/23276bc5-33eb-47db-8136-4576dbf6b945-kube-api-access-lt9wg\") pod \"certified-operators-prlzm\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.590068 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.590662 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.602971 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.620782 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.635832 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.636072 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-utilities\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.636112 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-catalog-content\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.636130 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwj9f\" (UniqueName: \"kubernetes.io/projected/fbc6a05e-2681-43a1-9c7a-5b20f879161a-kube-api-access-bwj9f\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.636162 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e016f72-4669-4f8c-aba5-0f831b2555d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.636209 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e016f72-4669-4f8c-aba5-0f831b2555d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.636327 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.136310433 +0000 UTC m=+151.719001847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.639360 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-catalog-content\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.640969 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-utilities\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.660624 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.721232 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwj9f\" (UniqueName: \"kubernetes.io/projected/fbc6a05e-2681-43a1-9c7a-5b20f879161a-kube-api-access-bwj9f\") pod \"community-operators-mrvtj\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.738263 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.738514 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e016f72-4669-4f8c-aba5-0f831b2555d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.738581 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e016f72-4669-4f8c-aba5-0f831b2555d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.738639 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e016f72-4669-4f8c-aba5-0f831b2555d7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.738906 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.23889211 +0000 UTC m=+151.821583524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.753846 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.812372 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e016f72-4669-4f8c-aba5-0f831b2555d7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.822949 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.849082 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.849521 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.34949528 +0000 UTC m=+151.932186694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.853338 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.853618 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.353608399 +0000 UTC m=+151.936299813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.956577 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.956867 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.456840892 +0000 UTC m=+152.039532306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.956959 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:00 crc kubenswrapper[4872]: E0203 06:03:00.957252 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.457240712 +0000 UTC m=+152.039932126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:00 crc kubenswrapper[4872]: I0203 06:03:00.960034 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.058668 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.059069 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.559054111 +0000 UTC m=+152.141745515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.164329 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.164595 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.664583969 +0000 UTC m=+152.247275383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.256151 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fz7px" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.267570 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.268025 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-03 06:03:01.768011217 +0000 UTC m=+152.350702631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.274850 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.274921 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.351173 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s695g" event={"ID":"0002faa0-2d17-4c16-8634-913b9012788e","Type":"ContainerStarted","Data":"2ccefe2ce33fcd3a6dd08d0dcb53ee183efbf5a1b02c73b0dbbafa3415b91eaa"} Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.369961 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.370299 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.870287137 +0000 UTC m=+152.452978541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.410462 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:03:01 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:03:01 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:03:01 crc kubenswrapper[4872]: healthz check failed Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.410506 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.411699 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-s695g" podStartSLOduration=12.411664649 podStartE2EDuration="12.411664649s" podCreationTimestamp="2026-02-03 06:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:03:01.410979892 +0000 UTC m=+151.993671306" watchObservedRunningTime="2026-02-03 06:03:01.411664649 +0000 UTC m=+151.994356063" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.471872 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.472989 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:01.972970137 +0000 UTC m=+152.555661571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.557260 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8g49"] Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.577016 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.577535 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.077519462 +0000 UTC m=+152.660210876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.678277 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.678921 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.178903721 +0000 UTC m=+152.761595135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.687008 4872 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.724069 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.724792 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.732900 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.741485 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.781294 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.782428 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.282418111 +0000 UTC m=+152.865109525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.784395 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.784583 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.813487 4872 patch_prober.go:28] interesting pod/apiserver-76f77b778f-46ccx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]log ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]etcd ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/generic-apiserver-start-informers ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/max-in-flight-filter ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 03 06:03:01 crc kubenswrapper[4872]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 03 06:03:01 crc kubenswrapper[4872]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/project.openshift.io-projectcache ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/openshift.io-startinformers ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 03 06:03:01 crc kubenswrapper[4872]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 03 06:03:01 crc kubenswrapper[4872]: livez check failed Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.813534 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-46ccx" podUID="d546f30f-2c86-4925-8566-00e50b7875c7" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.879899 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prlzm"] Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.885137 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.886270 4872 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.38625572 +0000 UTC m=+152.968947134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:01 crc kubenswrapper[4872]: I0203 06:03:01.996317 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:01 crc kubenswrapper[4872]: E0203 06:03:01.996863 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.496850869 +0000 UTC m=+153.079542283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.049003 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7wnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.049058 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7wnn" podUID="e0b641a4-8a66-4550-961f-c273bd9940e0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.049164 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.049196 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.049250 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7wnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.049294 4872 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-f7wnn" podUID="e0b641a4-8a66-4550-961f-c273bd9940e0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.054737 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sz2ft" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.064997 4872 patch_prober.go:28] interesting pod/console-f9d7485db-rlbdg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.065043 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rlbdg" podUID="0180e076-5e8c-4190-bd67-569e2f915913" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.070482 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcjj"] Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.071425 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.077260 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.084112 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.100257 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:02 crc kubenswrapper[4872]: E0203 06:03:02.101296 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.601281321 +0000 UTC m=+153.183972725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.115797 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcjj"] Feb 03 06:03:02 crc kubenswrapper[4872]: W0203 06:03:02.178672 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3270edf2_a9cf_47e7_8e06_90f0f97fb13f.slice/crio-17575f68971135f84f4e5c4db39021adc408c198a14bec27d0190e75d5a94e81 WatchSource:0}: Error finding container 17575f68971135f84f4e5c4db39021adc408c198a14bec27d0190e75d5a94e81: Status 404 returned error can't find the container with id 17575f68971135f84f4e5c4db39021adc408c198a14bec27d0190e75d5a94e81 Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.204320 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdwrt\" (UniqueName: \"kubernetes.io/projected/176a4bdd-a364-4676-bd0f-2a7d286fdab9-kube-api-access-kdwrt\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.204363 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.204529 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-catalog-content\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.204555 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-utilities\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: E0203 06:03:02.206927 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.706914942 +0000 UTC m=+153.289606356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.207095 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r8kmz"] Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.207121 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mrvtj"] Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.305828 4872 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-03T06:03:01.687027186Z","Handler":null,"Name":""} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.306126 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:02 crc kubenswrapper[4872]: E0203 06:03:02.306193 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.806175499 +0000 UTC m=+153.388866913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.306628 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdwrt\" (UniqueName: \"kubernetes.io/projected/176a4bdd-a364-4676-bd0f-2a7d286fdab9-kube-api-access-kdwrt\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.306741 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.306948 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-catalog-content\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.307027 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-utilities\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: E0203 06:03:02.307460 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 06:03:02.80744579 +0000 UTC m=+153.390137204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7xf8z" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.307570 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-utilities\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.312000 4872 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.312099 4872 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.313990 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-catalog-content\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.373244 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrvtj" event={"ID":"fbc6a05e-2681-43a1-9c7a-5b20f879161a","Type":"ContainerStarted","Data":"18bda3e2fa2abdeede2cfcc218b659e39970a0fa49c0d2b306dfa619da804993"} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.376606 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8kmz" event={"ID":"3270edf2-a9cf-47e7-8e06-90f0f97fb13f","Type":"ContainerStarted","Data":"17575f68971135f84f4e5c4db39021adc408c198a14bec27d0190e75d5a94e81"} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.382479 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdwrt\" (UniqueName: \"kubernetes.io/projected/176a4bdd-a364-4676-bd0f-2a7d286fdab9-kube-api-access-kdwrt\") pod \"redhat-marketplace-rlcjj\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.392857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bd409686-8b1a-46bb-83aa-dcc86bad79d0","Type":"ContainerStarted","Data":"872c22b9a17d63c7012d74a28dfdc23dfedfc76fbdf1ab2e7fb9497ea53cd6d2"} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.399714 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.402968 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prlzm" 
event={"ID":"23276bc5-33eb-47db-8136-4576dbf6b945","Type":"ContainerStarted","Data":"f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593"} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.403005 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prlzm" event={"ID":"23276bc5-33eb-47db-8136-4576dbf6b945","Type":"ContainerStarted","Data":"7f30c9507c70f1c74a0b6071afa8bc73e02b4cfc8b0c0a0d84c660d58037f00d"} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.403859 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:03:02 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:03:02 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:03:02 crc kubenswrapper[4872]: healthz check failed Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.403919 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.409270 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.410109 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crj46"] Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.415089 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.421982 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.431033 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crj46"] Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.435633 4872 generic.go:334] "Generic (PLEG): container finished" podID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerID="e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a" exitCode=0 Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.435752 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8g49" event={"ID":"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc","Type":"ContainerDied","Data":"e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a"} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.435793 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8g49" event={"ID":"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc","Type":"ContainerStarted","Data":"59df17169aee6c69d64fe96101e792d63f6bcf93b168770efd8cdad71bba133d"} Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.437954 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.449046 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w5ttc" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.457943 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ddbts" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.500926 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.509956 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdzng\" (UniqueName: \"kubernetes.io/projected/28287dc3-2b46-498f-9972-5a861374f4d5-kube-api-access-sdzng\") pod \"28287dc3-2b46-498f-9972-5a861374f4d5\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.510039 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28287dc3-2b46-498f-9972-5a861374f4d5-secret-volume\") pod \"28287dc3-2b46-498f-9972-5a861374f4d5\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.510116 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28287dc3-2b46-498f-9972-5a861374f4d5-config-volume\") pod \"28287dc3-2b46-498f-9972-5a861374f4d5\" (UID: \"28287dc3-2b46-498f-9972-5a861374f4d5\") " Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.510329 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qn8\" (UniqueName: \"kubernetes.io/projected/4b8fc51f-2388-428e-93c9-e6f2a1b14740-kube-api-access-r7qn8\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.510396 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-utilities\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.510521 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-catalog-content\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.517305 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28287dc3-2b46-498f-9972-5a861374f4d5-config-volume" (OuterVolumeSpecName: "config-volume") pod "28287dc3-2b46-498f-9972-5a861374f4d5" (UID: "28287dc3-2b46-498f-9972-5a861374f4d5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.522992 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.528444 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28287dc3-2b46-498f-9972-5a861374f4d5-kube-api-access-sdzng" (OuterVolumeSpecName: "kube-api-access-sdzng") pod "28287dc3-2b46-498f-9972-5a861374f4d5" (UID: "28287dc3-2b46-498f-9972-5a861374f4d5"). InnerVolumeSpecName "kube-api-access-sdzng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.532209 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28287dc3-2b46-498f-9972-5a861374f4d5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28287dc3-2b46-498f-9972-5a861374f4d5" (UID: "28287dc3-2b46-498f-9972-5a861374f4d5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:03:02 crc kubenswrapper[4872]: W0203 06:03:02.547868 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e016f72_4669_4f8c_aba5_0f831b2555d7.slice/crio-69cc2e3780feb639efe909237ea3171a82403c3cd465c5aa3fb8722fd1153124 WatchSource:0}: Error finding container 69cc2e3780feb639efe909237ea3171a82403c3cd465c5aa3fb8722fd1153124: Status 404 returned error can't find the container with id 69cc2e3780feb639efe909237ea3171a82403c3cd465c5aa3fb8722fd1153124 Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.548710 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.561370 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.612326 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-utilities\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.612437 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.612463 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-catalog-content\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.612540 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7qn8\" (UniqueName: \"kubernetes.io/projected/4b8fc51f-2388-428e-93c9-e6f2a1b14740-kube-api-access-r7qn8\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.612589 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28287dc3-2b46-498f-9972-5a861374f4d5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.612604 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28287dc3-2b46-498f-9972-5a861374f4d5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.612616 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdzng\" (UniqueName: \"kubernetes.io/projected/28287dc3-2b46-498f-9972-5a861374f4d5-kube-api-access-sdzng\") on node \"crc\" DevicePath \"\"" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.613409 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-utilities\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.614188 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-catalog-content\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.630991 4872 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.631026 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.658105 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qn8\" (UniqueName: \"kubernetes.io/projected/4b8fc51f-2388-428e-93c9-e6f2a1b14740-kube-api-access-r7qn8\") pod \"redhat-marketplace-crj46\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " pod="openshift-marketplace/redhat-marketplace-crj46"
Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.755939 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crj46"
Feb 03 06:03:02 crc kubenswrapper[4872]: I0203 06:03:02.826889 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7xf8z\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.026654 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d755m"]
Feb 03 06:03:03 crc kubenswrapper[4872]: E0203 06:03:03.027249 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28287dc3-2b46-498f-9972-5a861374f4d5" containerName="collect-profiles"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.027261 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="28287dc3-2b46-498f-9972-5a861374f4d5" containerName="collect-profiles"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.027425 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="28287dc3-2b46-498f-9972-5a861374f4d5" containerName="collect-profiles"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.028466 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.039888 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.041551 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d755m"]
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.131141 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.131758 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs475\" (UniqueName: \"kubernetes.io/projected/284c00ef-fffa-4fa7-8545-91f490740333-kube-api-access-rs475\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.131833 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-catalog-content\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.131920 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-utilities\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.234272 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-utilities\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.234335 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs475\" (UniqueName: \"kubernetes.io/projected/284c00ef-fffa-4fa7-8545-91f490740333-kube-api-access-rs475\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.234368 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-catalog-content\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.235156 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-catalog-content\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.235934 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-utilities\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.286309 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs475\" (UniqueName: \"kubernetes.io/projected/284c00ef-fffa-4fa7-8545-91f490740333-kube-api-access-rs475\") pod \"redhat-operators-d755m\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.330325 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcjj"]
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.392730 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wswtm"]
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.395263 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d755m"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.398114 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.406307 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:03 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:03 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:03 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.406374 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.424371 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wswtm"]
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.439591 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6g84\" (UniqueName: \"kubernetes.io/projected/5d42681c-20a8-4b9c-843e-89102105dcfe-kube-api-access-z6g84\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.439667 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-catalog-content\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.439764 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-utilities\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.531041 4872 generic.go:334] "Generic (PLEG): container finished" podID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerID="4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb" exitCode=0
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.531135 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrvtj" event={"ID":"fbc6a05e-2681-43a1-9c7a-5b20f879161a","Type":"ContainerDied","Data":"4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.546135 4872 generic.go:334] "Generic (PLEG): container finished" podID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerID="ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a" exitCode=0
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.546190 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8kmz" event={"ID":"3270edf2-a9cf-47e7-8e06-90f0f97fb13f","Type":"ContainerDied","Data":"ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.547267 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6g84\" (UniqueName: \"kubernetes.io/projected/5d42681c-20a8-4b9c-843e-89102105dcfe-kube-api-access-z6g84\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.547326 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-catalog-content\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.547387 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-utilities\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.547869 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-utilities\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.548191 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-catalog-content\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.569665 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crj46"]
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.588838 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bd409686-8b1a-46bb-83aa-dcc86bad79d0","Type":"ContainerStarted","Data":"44f83bc3733b7e701b37de7af055018064a051f7dd33fd7de974dc7771733c6c"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.602535 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6g84\" (UniqueName: \"kubernetes.io/projected/5d42681c-20a8-4b9c-843e-89102105dcfe-kube-api-access-z6g84\") pod \"redhat-operators-wswtm\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: W0203 06:03:03.603066 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b8fc51f_2388_428e_93c9_e6f2a1b14740.slice/crio-2d4bb0fe9b60319b27b9044202ec580bc4006481d1f968a9191a435359829ac2 WatchSource:0}: Error finding container 2d4bb0fe9b60319b27b9044202ec580bc4006481d1f968a9191a435359829ac2: Status 404 returned error can't find the container with id 2d4bb0fe9b60319b27b9044202ec580bc4006481d1f968a9191a435359829ac2
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.605326 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcjj" event={"ID":"176a4bdd-a364-4676-bd0f-2a7d286fdab9","Type":"ContainerStarted","Data":"d7a2d1bd74e18399bb9780b2d1a411d47019585056782f73c7c99f93151d0b7a"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.608840 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e016f72-4669-4f8c-aba5-0f831b2555d7","Type":"ContainerStarted","Data":"15b3ed1c3850751db1a8264363cd155e1ca88c197dd9f6f87083c32740bbf8b7"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.608880 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e016f72-4669-4f8c-aba5-0f831b2555d7","Type":"ContainerStarted","Data":"69cc2e3780feb639efe909237ea3171a82403c3cd465c5aa3fb8722fd1153124"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.616217 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt" event={"ID":"28287dc3-2b46-498f-9972-5a861374f4d5","Type":"ContainerDied","Data":"a139bf8ee1f5d7c87bc467809f460abf23c45947c0db7371270446a5073630f4"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.616257 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a139bf8ee1f5d7c87bc467809f460abf23c45947c0db7371270446a5073630f4"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.616336 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.628939 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.62892033 podStartE2EDuration="4.62892033s" podCreationTimestamp="2026-02-03 06:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:03:03.619411172 +0000 UTC m=+154.202102586" watchObservedRunningTime="2026-02-03 06:03:03.62892033 +0000 UTC m=+154.211611744"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.642053 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.642035655 podStartE2EDuration="3.642035655s" podCreationTimestamp="2026-02-03 06:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:03:03.640405285 +0000 UTC m=+154.223096699" watchObservedRunningTime="2026-02-03 06:03:03.642035655 +0000 UTC m=+154.224727059"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.669525 4872 generic.go:334] "Generic (PLEG): container finished" podID="23276bc5-33eb-47db-8136-4576dbf6b945" containerID="f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593" exitCode=0
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.670219 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prlzm" event={"ID":"23276bc5-33eb-47db-8136-4576dbf6b945","Type":"ContainerDied","Data":"f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593"}
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.745168 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wswtm"
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.823990 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d755m"]
Feb 03 06:03:03 crc kubenswrapper[4872]: I0203 06:03:03.880395 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7xf8z"]
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.061457 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wswtm"]
Feb 03 06:03:04 crc kubenswrapper[4872]: W0203 06:03:04.071843 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d42681c_20a8_4b9c_843e_89102105dcfe.slice/crio-64320244273a6cde7b9f990152934660a863587e71613a327589e9f8f86262e8 WatchSource:0}: Error finding container 64320244273a6cde7b9f990152934660a863587e71613a327589e9f8f86262e8: Status 404 returned error can't find the container with id 64320244273a6cde7b9f990152934660a863587e71613a327589e9f8f86262e8
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.140852 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.404028 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:04 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:04 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:04 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.404086 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.683848 4872 generic.go:334] "Generic (PLEG): container finished" podID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerID="e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4" exitCode=0
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.684087 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crj46" event={"ID":"4b8fc51f-2388-428e-93c9-e6f2a1b14740","Type":"ContainerDied","Data":"e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.684151 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crj46" event={"ID":"4b8fc51f-2388-428e-93c9-e6f2a1b14740","Type":"ContainerStarted","Data":"2d4bb0fe9b60319b27b9044202ec580bc4006481d1f968a9191a435359829ac2"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.691348 4872 generic.go:334] "Generic (PLEG): container finished" podID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerID="e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098" exitCode=0
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.691815 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wswtm" event={"ID":"5d42681c-20a8-4b9c-843e-89102105dcfe","Type":"ContainerDied","Data":"e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.691841 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wswtm" event={"ID":"5d42681c-20a8-4b9c-843e-89102105dcfe","Type":"ContainerStarted","Data":"64320244273a6cde7b9f990152934660a863587e71613a327589e9f8f86262e8"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.695457 4872 generic.go:334] "Generic (PLEG): container finished" podID="bd409686-8b1a-46bb-83aa-dcc86bad79d0" containerID="44f83bc3733b7e701b37de7af055018064a051f7dd33fd7de974dc7771733c6c" exitCode=0
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.695493 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bd409686-8b1a-46bb-83aa-dcc86bad79d0","Type":"ContainerDied","Data":"44f83bc3733b7e701b37de7af055018064a051f7dd33fd7de974dc7771733c6c"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.701627 4872 generic.go:334] "Generic (PLEG): container finished" podID="284c00ef-fffa-4fa7-8545-91f490740333" containerID="9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c" exitCode=0
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.701718 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d755m" event={"ID":"284c00ef-fffa-4fa7-8545-91f490740333","Type":"ContainerDied","Data":"9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.701771 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d755m" event={"ID":"284c00ef-fffa-4fa7-8545-91f490740333","Type":"ContainerStarted","Data":"874a035a16c9bb430d29fbb80dd005db94e2d5f3875067dd8fb03acb13b1df60"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.706063 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" event={"ID":"7976c56b-b1e2-432b-9abb-d88e6483c5bc","Type":"ContainerStarted","Data":"b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.706101 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" event={"ID":"7976c56b-b1e2-432b-9abb-d88e6483c5bc","Type":"ContainerStarted","Data":"0704f618eddf0203f546e9391169d0a9e1587b9baaffbc2d7f8325aa5cfc784c"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.706871 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z"
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.709613 4872 generic.go:334] "Generic (PLEG): container finished" podID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerID="558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3" exitCode=0
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.709639 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcjj" event={"ID":"176a4bdd-a364-4676-bd0f-2a7d286fdab9","Type":"ContainerDied","Data":"558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.721851 4872 generic.go:334] "Generic (PLEG): container finished" podID="5e016f72-4669-4f8c-aba5-0f831b2555d7" containerID="15b3ed1c3850751db1a8264363cd155e1ca88c197dd9f6f87083c32740bbf8b7" exitCode=0
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.722243 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e016f72-4669-4f8c-aba5-0f831b2555d7","Type":"ContainerDied","Data":"15b3ed1c3850751db1a8264363cd155e1ca88c197dd9f6f87083c32740bbf8b7"}
Feb 03 06:03:04 crc kubenswrapper[4872]: I0203 06:03:04.755861 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" podStartSLOduration=129.755840039 podStartE2EDuration="2m9.755840039s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:03:04.749379194 +0000 UTC m=+155.332070608" watchObservedRunningTime="2026-02-03 06:03:04.755840039 +0000 UTC m=+155.338531453"
Feb 03 06:03:05 crc kubenswrapper[4872]: I0203 06:03:05.403384 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:05 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:05 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:05 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:05 crc kubenswrapper[4872]: I0203 06:03:05.403428 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.016478 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.088344 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.107964 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e016f72-4669-4f8c-aba5-0f831b2555d7-kube-api-access\") pod \"5e016f72-4669-4f8c-aba5-0f831b2555d7\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") "
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.109473 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e016f72-4669-4f8c-aba5-0f831b2555d7-kubelet-dir\") pod \"5e016f72-4669-4f8c-aba5-0f831b2555d7\" (UID: \"5e016f72-4669-4f8c-aba5-0f831b2555d7\") "
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.109977 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kube-api-access\") pod \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") "
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.109801 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e016f72-4669-4f8c-aba5-0f831b2555d7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e016f72-4669-4f8c-aba5-0f831b2555d7" (UID: "5e016f72-4669-4f8c-aba5-0f831b2555d7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.112679 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kubelet-dir\") pod \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\" (UID: \"bd409686-8b1a-46bb-83aa-dcc86bad79d0\") "
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.112808 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bd409686-8b1a-46bb-83aa-dcc86bad79d0" (UID: "bd409686-8b1a-46bb-83aa-dcc86bad79d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.114473 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bd409686-8b1a-46bb-83aa-dcc86bad79d0" (UID: "bd409686-8b1a-46bb-83aa-dcc86bad79d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.117167 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e016f72-4669-4f8c-aba5-0f831b2555d7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e016f72-4669-4f8c-aba5-0f831b2555d7" (UID: "5e016f72-4669-4f8c-aba5-0f831b2555d7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.117515 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e016f72-4669-4f8c-aba5-0f831b2555d7-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.117528 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e016f72-4669-4f8c-aba5-0f831b2555d7-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.117536 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.117545 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd409686-8b1a-46bb-83aa-dcc86bad79d0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.400467 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:06 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:06 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:06 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.400520 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.780711 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.780726 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5e016f72-4669-4f8c-aba5-0f831b2555d7","Type":"ContainerDied","Data":"69cc2e3780feb639efe909237ea3171a82403c3cd465c5aa3fb8722fd1153124"}
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.781380 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69cc2e3780feb639efe909237ea3171a82403c3cd465c5aa3fb8722fd1153124"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.789075 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-46ccx"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.792576 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-46ccx"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.794732 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.795149 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bd409686-8b1a-46bb-83aa-dcc86bad79d0","Type":"ContainerDied","Data":"872c22b9a17d63c7012d74a28dfdc23dfedfc76fbdf1ab2e7fb9497ea53cd6d2"}
Feb 03 06:03:06 crc kubenswrapper[4872]: I0203 06:03:06.795170 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="872c22b9a17d63c7012d74a28dfdc23dfedfc76fbdf1ab2e7fb9497ea53cd6d2"
Feb 03 06:03:07 crc kubenswrapper[4872]: I0203 06:03:07.401063 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:07 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:07 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:07 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:07 crc kubenswrapper[4872]: I0203 06:03:07.401109 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:07 crc kubenswrapper[4872]: I0203 06:03:07.738922 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xwb69"
Feb 03 06:03:08 crc kubenswrapper[4872]: I0203 06:03:08.399265 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:08 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:08 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:08 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:08 crc kubenswrapper[4872]: I0203 06:03:08.399490 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:09 crc kubenswrapper[4872]: I0203 06:03:09.399788 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:09 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:09 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:09 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:09 crc kubenswrapper[4872]: I0203 06:03:09.399851 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:10 crc kubenswrapper[4872]: I0203 06:03:10.403128 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:10 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:10 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:10 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:10 crc kubenswrapper[4872]: I0203 06:03:10.403195 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:11 crc kubenswrapper[4872]: I0203 06:03:11.399278 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 06:03:11 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld
Feb 03 06:03:11 crc kubenswrapper[4872]: [+]process-running ok
Feb 03 06:03:11 crc kubenswrapper[4872]: healthz check failed
Feb 03 06:03:11 crc kubenswrapper[4872]: I0203 06:03:11.399586 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.034854 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7wnn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.034923 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7wnn" podUID="e0b641a4-8a66-4550-961f-c273bd9940e0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.034874 4872 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7wnn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.035030 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7wnn" podUID="e0b641a4-8a66-4550-961f-c273bd9940e0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.046076 4872 patch_prober.go:28] interesting pod/console-f9d7485db-rlbdg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.046124 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rlbdg" podUID="0180e076-5e8c-4190-bd67-569e2f915913" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.399609 4872
patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:03:12 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:03:12 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:03:12 crc kubenswrapper[4872]: healthz check failed Feb 03 06:03:12 crc kubenswrapper[4872]: I0203 06:03:12.399717 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:03:13 crc kubenswrapper[4872]: I0203 06:03:13.399787 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:03:13 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:03:13 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:03:13 crc kubenswrapper[4872]: healthz check failed Feb 03 06:03:13 crc kubenswrapper[4872]: I0203 06:03:13.400149 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:03:14 crc kubenswrapper[4872]: I0203 06:03:14.399106 4872 patch_prober.go:28] interesting pod/router-default-5444994796-7m4zk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 06:03:14 crc kubenswrapper[4872]: [-]has-synced failed: reason withheld Feb 03 06:03:14 crc kubenswrapper[4872]: [+]process-running ok Feb 03 06:03:14 crc kubenswrapper[4872]: healthz check failed Feb 03 06:03:14 crc kubenswrapper[4872]: I0203 06:03:14.399177 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7m4zk" podUID="4955a26c-c5ff-42cb-a72f-b37e3bfded04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:03:15 crc kubenswrapper[4872]: I0203 06:03:15.399912 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:03:15 crc kubenswrapper[4872]: I0203 06:03:15.401864 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7m4zk" Feb 03 06:03:18 crc kubenswrapper[4872]: I0203 06:03:18.249667 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:03:18 crc kubenswrapper[4872]: I0203 06:03:18.261480 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a14ad474-acae-486b-bac9-e5e20cc8ec2e-metrics-certs\") pod \"network-metrics-daemon-drpfn\" (UID: \"a14ad474-acae-486b-bac9-e5e20cc8ec2e\") " 
pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:03:18 crc kubenswrapper[4872]: I0203 06:03:18.394098 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drpfn" Feb 03 06:03:22 crc kubenswrapper[4872]: I0203 06:03:22.049137 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-f7wnn" Feb 03 06:03:22 crc kubenswrapper[4872]: I0203 06:03:22.052179 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:03:22 crc kubenswrapper[4872]: I0203 06:03:22.056639 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:03:23 crc kubenswrapper[4872]: I0203 06:03:23.139484 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:03:29 crc kubenswrapper[4872]: E0203 06:03:29.877008 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 06:03:29 crc kubenswrapper[4872]: E0203 06:03:29.877983 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6g84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wswtm_openshift-marketplace(5d42681c-20a8-4b9c-843e-89102105dcfe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 06:03:29 crc kubenswrapper[4872]: E0203 06:03:29.879259 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/redhat-operators-wswtm" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" Feb 03 06:03:31 crc kubenswrapper[4872]: I0203 06:03:31.271062 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:03:31 crc kubenswrapper[4872]: I0203 06:03:31.271749 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:03:32 crc kubenswrapper[4872]: E0203 06:03:32.408960 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wswtm" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" Feb 03 06:03:32 crc kubenswrapper[4872]: I0203 06:03:32.461030 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q9w6x" Feb 03 06:03:32 crc kubenswrapper[4872]: E0203 06:03:32.485254 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 06:03:32 crc kubenswrapper[4872]: E0203 06:03:32.485483 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdwrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rlcjj_openshift-marketplace(176a4bdd-a364-4676-bd0f-2a7d286fdab9): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 06:03:32 crc kubenswrapper[4872]: E0203 06:03:32.486908 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rlcjj" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" Feb 03 06:03:32 crc kubenswrapper[4872]: E0203 06:03:32.527845 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 03 06:03:32 crc kubenswrapper[4872]: E0203 06:03:32.527996 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rs475,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-d755m_openshift-marketplace(284c00ef-fffa-4fa7-8545-91f490740333): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 06:03:32 crc kubenswrapper[4872]: E0203 06:03:32.529322 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-d755m" podUID="284c00ef-fffa-4fa7-8545-91f490740333" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.863939 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d755m" podUID="284c00ef-fffa-4fa7-8545-91f490740333" Feb 03 06:03:33 crc 
kubenswrapper[4872]: E0203 06:03:33.864459 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rlcjj" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.951870 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.951987 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwj9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mrvtj_openshift-marketplace(fbc6a05e-2681-43a1-9c7a-5b20f879161a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.960838 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.960920 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7qn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-crj46_openshift-marketplace(4b8fc51f-2388-428e-93c9-e6f2a1b14740): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.960988 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mrvtj" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.962634 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-crj46" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.971386 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.971483 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvvhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r8kmz_openshift-marketplace(3270edf2-a9cf-47e7-8e06-90f0f97fb13f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 06:03:33 crc kubenswrapper[4872]: E0203 06:03:33.973108 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-r8kmz" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" Feb 03 06:03:34 crc kubenswrapper[4872]: E0203 06:03:34.022111 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 03 06:03:34 crc kubenswrapper[4872]: E0203 06:03:34.022235 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt9wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-prlzm_openshift-marketplace(23276bc5-33eb-47db-8136-4576dbf6b945): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 03 06:03:34 crc kubenswrapper[4872]: E0203 06:03:34.023607 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-prlzm" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" Feb 03 06:03:34 crc kubenswrapper[4872]: I0203 06:03:34.303353 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-drpfn"] Feb 03 06:03:34 crc kubenswrapper[4872]: I0203 06:03:34.306421 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8g49" event={"ID":"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc","Type":"ContainerStarted","Data":"8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317"} Feb 03 06:03:34 crc kubenswrapper[4872]: E0203 06:03:34.308121 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-crj46" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" Feb 03 06:03:34 crc kubenswrapper[4872]: E0203 06:03:34.312010 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r8kmz" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" Feb 03 06:03:34 crc kubenswrapper[4872]: E0203 06:03:34.312252 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-prlzm" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" Feb 03 06:03:34 crc kubenswrapper[4872]: E0203 06:03:34.312301 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mrvtj" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" Feb 03 06:03:34 crc kubenswrapper[4872]: W0203 06:03:34.314464 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda14ad474_acae_486b_bac9_e5e20cc8ec2e.slice/crio-85c48686cccf40dc92d49220d93443da3fb779bf40bb57ffd22351b5ff14d4b6 WatchSource:0}: Error finding container 85c48686cccf40dc92d49220d93443da3fb779bf40bb57ffd22351b5ff14d4b6: Status 404 returned error can't find the container with id 85c48686cccf40dc92d49220d93443da3fb779bf40bb57ffd22351b5ff14d4b6 Feb 03 06:03:35 crc kubenswrapper[4872]: I0203 06:03:35.313376 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-drpfn" event={"ID":"a14ad474-acae-486b-bac9-e5e20cc8ec2e","Type":"ContainerStarted","Data":"4916c94a78a6d953e3fd486c0f88ef260e2f27802a12ea80d529024d52dc9b2c"} Feb 03 06:03:35 crc kubenswrapper[4872]: I0203 06:03:35.314076 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-drpfn" event={"ID":"a14ad474-acae-486b-bac9-e5e20cc8ec2e","Type":"ContainerStarted","Data":"2c73fe9f09f3c4852160613456badc13402c3b570cdfb0afd23c8c80aa6c63a9"} Feb 03 06:03:35 crc kubenswrapper[4872]: I0203 06:03:35.314104 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-drpfn" event={"ID":"a14ad474-acae-486b-bac9-e5e20cc8ec2e","Type":"ContainerStarted","Data":"85c48686cccf40dc92d49220d93443da3fb779bf40bb57ffd22351b5ff14d4b6"} Feb 03 06:03:35 crc kubenswrapper[4872]: I0203 06:03:35.317577 4872 generic.go:334] "Generic (PLEG): container finished" podID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerID="8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317" exitCode=0 Feb 03 06:03:35 crc kubenswrapper[4872]: I0203 06:03:35.317619 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8g49" event={"ID":"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc","Type":"ContainerDied","Data":"8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317"} Feb 03 06:03:35 crc kubenswrapper[4872]: I0203 06:03:35.346475 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-drpfn" podStartSLOduration=160.346455093 podStartE2EDuration="2m40.346455093s" podCreationTimestamp="2026-02-03 06:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:03:35.344919316 +0000 UTC m=+185.927610740" watchObservedRunningTime="2026-02-03 06:03:35.346455093 +0000 UTC m=+185.929146517" Feb 03 06:03:36 crc kubenswrapper[4872]: I0203 06:03:36.324621 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8g49" event={"ID":"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc","Type":"ContainerStarted","Data":"d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6"} Feb 03 06:03:36 crc kubenswrapper[4872]: I0203 06:03:36.344685 4872 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8g49" podStartSLOduration=4.058119294 podStartE2EDuration="37.344665138s" podCreationTimestamp="2026-02-03 06:02:59 +0000 UTC" firstStartedPulling="2026-02-03 06:03:02.44142529 +0000 UTC m=+153.024116704" lastFinishedPulling="2026-02-03 06:03:35.727971134 +0000 UTC m=+186.310662548" observedRunningTime="2026-02-03 06:03:36.342015635 +0000 UTC m=+186.924707069" watchObservedRunningTime="2026-02-03 06:03:36.344665138 +0000 UTC m=+186.927356552" Feb 03 06:03:37 crc kubenswrapper[4872]: I0203 06:03:37.670235 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 06:03:40 crc kubenswrapper[4872]: I0203 06:03:40.383204 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:40 crc kubenswrapper[4872]: I0203 06:03:40.384118 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:40 crc kubenswrapper[4872]: I0203 06:03:40.516800 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:40 crc kubenswrapper[4872]: I0203 06:03:40.754782 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v5phf"] Feb 03 06:03:41 crc kubenswrapper[4872]: I0203 06:03:41.376500 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.100842 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 06:03:44 crc kubenswrapper[4872]: E0203 06:03:44.101144 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd409686-8b1a-46bb-83aa-dcc86bad79d0" containerName="pruner" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.101166 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd409686-8b1a-46bb-83aa-dcc86bad79d0" containerName="pruner" Feb 03 06:03:44 crc kubenswrapper[4872]: E0203 06:03:44.101207 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e016f72-4669-4f8c-aba5-0f831b2555d7" containerName="pruner" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.101219 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e016f72-4669-4f8c-aba5-0f831b2555d7" containerName="pruner" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.101379 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e016f72-4669-4f8c-aba5-0f831b2555d7" containerName="pruner" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.101411 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd409686-8b1a-46bb-83aa-dcc86bad79d0" containerName="pruner" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.102037 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.104704 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.109252 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.127022 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.208255 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.208510 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.309854 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.309908 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.310003 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.326472 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.433659 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:44 crc kubenswrapper[4872]: I0203 06:03:44.814088 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 03 06:03:45 crc kubenswrapper[4872]: I0203 06:03:45.363110 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb","Type":"ContainerStarted","Data":"afc6ffa69c64ef9748f2d5d0b188ec08dea22d8779465e9a58a6770729aedc39"} Feb 03 06:03:45 crc kubenswrapper[4872]: I0203 06:03:45.363489 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb","Type":"ContainerStarted","Data":"2090cd1aeaa9e55b0422dcbd546a1a8652c40c246725323db4cab2714775facb"} Feb 03 06:03:45 crc kubenswrapper[4872]: I0203 06:03:45.381120 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.381096778 podStartE2EDuration="1.381096778s" podCreationTimestamp="2026-02-03 06:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:03:45.373527889 +0000 UTC m=+195.956219303" watchObservedRunningTime="2026-02-03 06:03:45.381096778 +0000 UTC m=+195.963788202" Feb 03 06:03:46 crc kubenswrapper[4872]: I0203 06:03:46.373565 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d755m" event={"ID":"284c00ef-fffa-4fa7-8545-91f490740333","Type":"ContainerStarted","Data":"91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19"} Feb 03 06:03:46 crc kubenswrapper[4872]: I0203 06:03:46.378367 4872 generic.go:334] "Generic (PLEG): container finished" podID="0c7aa53c-b385-42c1-be1c-6e87ad84c9bb" containerID="afc6ffa69c64ef9748f2d5d0b188ec08dea22d8779465e9a58a6770729aedc39" exitCode=0 Feb 03 06:03:46 crc kubenswrapper[4872]: I0203 06:03:46.378410 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb","Type":"ContainerDied","Data":"afc6ffa69c64ef9748f2d5d0b188ec08dea22d8779465e9a58a6770729aedc39"} Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.386622 4872 generic.go:334] "Generic (PLEG): container finished" podID="284c00ef-fffa-4fa7-8545-91f490740333" containerID="91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19" exitCode=0 Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.386932 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d755m" event={"ID":"284c00ef-fffa-4fa7-8545-91f490740333","Type":"ContainerDied","Data":"91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19"} Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.765251 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.860210 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kubelet-dir\") pod \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.860308 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kube-api-access\") pod \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\" (UID: \"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb\") " Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.860366 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0c7aa53c-b385-42c1-be1c-6e87ad84c9bb" (UID: "0c7aa53c-b385-42c1-be1c-6e87ad84c9bb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.860535 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.867623 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0c7aa53c-b385-42c1-be1c-6e87ad84c9bb" (UID: "0c7aa53c-b385-42c1-be1c-6e87ad84c9bb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:03:47 crc kubenswrapper[4872]: I0203 06:03:47.961863 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c7aa53c-b385-42c1-be1c-6e87ad84c9bb-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.410771 4872 generic.go:334] "Generic (PLEG): container finished" podID="23276bc5-33eb-47db-8136-4576dbf6b945" containerID="9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e" exitCode=0 Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.410897 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prlzm" event={"ID":"23276bc5-33eb-47db-8136-4576dbf6b945","Type":"ContainerDied","Data":"9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e"} Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.417149 4872 generic.go:334] "Generic (PLEG): container finished" podID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerID="93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a" exitCode=0 Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.417231 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crj46" event={"ID":"4b8fc51f-2388-428e-93c9-e6f2a1b14740","Type":"ContainerDied","Data":"93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a"} Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.422991 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wswtm" event={"ID":"5d42681c-20a8-4b9c-843e-89102105dcfe","Type":"ContainerStarted","Data":"dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8"} Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.429217 4872 generic.go:334] "Generic (PLEG): container finished" podID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerID="052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d" exitCode=0 Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.429384 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8kmz" event={"ID":"3270edf2-a9cf-47e7-8e06-90f0f97fb13f","Type":"ContainerDied","Data":"052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d"} Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.438764 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d755m" event={"ID":"284c00ef-fffa-4fa7-8545-91f490740333","Type":"ContainerStarted","Data":"06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08"} Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.446999 4872 generic.go:334] "Generic (PLEG): container finished" podID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerID="1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265" exitCode=0 Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.447059 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcjj" event={"ID":"176a4bdd-a364-4676-bd0f-2a7d286fdab9","Type":"ContainerDied","Data":"1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265"} Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.450907 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"0c7aa53c-b385-42c1-be1c-6e87ad84c9bb","Type":"ContainerDied","Data":"2090cd1aeaa9e55b0422dcbd546a1a8652c40c246725323db4cab2714775facb"} Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.450940 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2090cd1aeaa9e55b0422dcbd546a1a8652c40c246725323db4cab2714775facb" Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.450992 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 03 06:03:48 crc kubenswrapper[4872]: I0203 06:03:48.488501 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d755m" podStartSLOduration=3.376525958 podStartE2EDuration="46.488482932s" podCreationTimestamp="2026-02-03 06:03:02 +0000 UTC" firstStartedPulling="2026-02-03 06:03:04.703216758 +0000 UTC m=+155.285908172" lastFinishedPulling="2026-02-03 06:03:47.815173732 +0000 UTC m=+198.397865146" observedRunningTime="2026-02-03 06:03:48.487862396 +0000 UTC m=+199.070553820" watchObservedRunningTime="2026-02-03 06:03:48.488482932 +0000 UTC m=+199.071174346" Feb 03 06:03:49 crc kubenswrapper[4872]: I0203 06:03:49.457219 4872 generic.go:334] "Generic (PLEG): container finished" podID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerID="dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8" exitCode=0 Feb 03 06:03:49 crc kubenswrapper[4872]: I0203 06:03:49.457310 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wswtm" event={"ID":"5d42681c-20a8-4b9c-843e-89102105dcfe","Type":"ContainerDied","Data":"dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8"} Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.463399 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrvtj" event={"ID":"fbc6a05e-2681-43a1-9c7a-5b20f879161a","Type":"ContainerStarted","Data":"b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d"} Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.466093 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wswtm" event={"ID":"5d42681c-20a8-4b9c-843e-89102105dcfe","Type":"ContainerStarted","Data":"dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76"} Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.468225 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8kmz" event={"ID":"3270edf2-a9cf-47e7-8e06-90f0f97fb13f","Type":"ContainerStarted","Data":"865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d"} Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.469991 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcjj" event={"ID":"176a4bdd-a364-4676-bd0f-2a7d286fdab9","Type":"ContainerStarted","Data":"ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36"} Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.472564 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prlzm" event={"ID":"23276bc5-33eb-47db-8136-4576dbf6b945","Type":"ContainerStarted","Data":"46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be"} Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.474733 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crj46" 
event={"ID":"4b8fc51f-2388-428e-93c9-e6f2a1b14740","Type":"ContainerStarted","Data":"24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d"} Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.547784 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prlzm" podStartSLOduration=3.296159757 podStartE2EDuration="50.54776822s" podCreationTimestamp="2026-02-03 06:03:00 +0000 UTC" firstStartedPulling="2026-02-03 06:03:02.421762559 +0000 UTC m=+153.004453973" lastFinishedPulling="2026-02-03 06:03:49.673371022 +0000 UTC m=+200.256062436" observedRunningTime="2026-02-03 06:03:50.520534433 +0000 UTC m=+201.103225847" watchObservedRunningTime="2026-02-03 06:03:50.54776822 +0000 UTC m=+201.130459634" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.548799 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rlcjj" podStartSLOduration=4.467010525 podStartE2EDuration="49.548794538s" podCreationTimestamp="2026-02-03 06:03:01 +0000 UTC" firstStartedPulling="2026-02-03 06:03:04.714840607 +0000 UTC m=+155.297532021" lastFinishedPulling="2026-02-03 06:03:49.79662462 +0000 UTC m=+200.379316034" observedRunningTime="2026-02-03 06:03:50.546203749 +0000 UTC m=+201.128895163" watchObservedRunningTime="2026-02-03 06:03:50.548794538 +0000 UTC m=+201.131485952" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.555248 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.555440 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.577666 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wswtm" podStartSLOduration=2.284111272 podStartE2EDuration="47.577648388s" podCreationTimestamp="2026-02-03 06:03:03 +0000 UTC" firstStartedPulling="2026-02-03 06:03:04.695845432 +0000 UTC m=+155.278536836" lastFinishedPulling="2026-02-03 06:03:49.989382538 +0000 UTC m=+200.572073952" observedRunningTime="2026-02-03 06:03:50.574564166 +0000 UTC m=+201.157255580" watchObservedRunningTime="2026-02-03 06:03:50.577648388 +0000 UTC m=+201.160339802" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.597217 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r8kmz" podStartSLOduration=5.426874402 podStartE2EDuration="51.597201554s" podCreationTimestamp="2026-02-03 06:02:59 +0000 UTC" firstStartedPulling="2026-02-03 06:03:03.558886692 +0000 UTC m=+154.141578106" lastFinishedPulling="2026-02-03 06:03:49.729213844 +0000 UTC m=+200.311905258" observedRunningTime="2026-02-03 06:03:50.594929833 +0000 UTC m=+201.177621247" watchObservedRunningTime="2026-02-03 06:03:50.597201554 +0000 UTC m=+201.179892968" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.622966 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crj46" podStartSLOduration=3.529048109 podStartE2EDuration="48.622949772s" podCreationTimestamp="2026-02-03 06:03:02 +0000 UTC" firstStartedPulling="2026-02-03 06:03:04.685831392 +0000 UTC m=+155.268522806" lastFinishedPulling="2026-02-03 06:03:49.779733055 +0000 UTC m=+200.362424469" observedRunningTime="2026-02-03 
06:03:50.620375404 +0000 UTC m=+201.203066818" watchObservedRunningTime="2026-02-03 06:03:50.622949772 +0000 UTC m=+201.205641176" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.755119 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:50 crc kubenswrapper[4872]: I0203 06:03:50.755173 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.481190 4872 generic.go:334] "Generic (PLEG): container finished" podID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerID="b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d" exitCode=0 Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.481270 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrvtj" event={"ID":"fbc6a05e-2681-43a1-9c7a-5b20f879161a","Type":"ContainerDied","Data":"b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d"} Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.593165 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-r8kmz" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="registry-server" probeResult="failure" output=< Feb 03 06:03:51 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:03:51 crc kubenswrapper[4872]: > Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.699164 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 06:03:51 crc kubenswrapper[4872]: E0203 06:03:51.699396 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7aa53c-b385-42c1-be1c-6e87ad84c9bb" containerName="pruner" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.699414 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7aa53c-b385-42c1-be1c-6e87ad84c9bb" containerName="pruner" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.699546 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7aa53c-b385-42c1-be1c-6e87ad84c9bb" containerName="pruner" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.700053 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.702532 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.702987 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.708012 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.807513 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prlzm" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="registry-server" probeResult="failure" output=< Feb 03 06:03:51 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:03:51 crc kubenswrapper[4872]: > Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.814105 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.814148 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-var-lock\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.814201 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.916017 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.916113 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.916141 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-var-lock\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.916227 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-var-lock\") 
pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.916260 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:51 crc kubenswrapper[4872]: I0203 06:03:51.939837 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.012526 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.488042 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrvtj" event={"ID":"fbc6a05e-2681-43a1-9c7a-5b20f879161a","Type":"ContainerStarted","Data":"8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4"} Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.512437 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.521959 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mrvtj" podStartSLOduration=4.069700199 podStartE2EDuration="52.521942666s" podCreationTimestamp="2026-02-03 06:03:00 +0000 UTC" firstStartedPulling="2026-02-03 06:03:03.538163735 +0000 UTC m=+154.120855139" lastFinishedPulling="2026-02-03 06:03:51.990406192 +0000 UTC m=+202.573097606" observedRunningTime="2026-02-03 06:03:52.520322534 +0000 UTC m=+203.103013948" watchObservedRunningTime="2026-02-03 06:03:52.521942666 +0000 UTC m=+203.104634080" Feb 03 06:03:52 crc kubenswrapper[4872]: W0203 06:03:52.526921 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod885ce7ef_0dc0_4f65_8ae0_296be20278fc.slice/crio-d8d6c0b84d44f3af5f4ef4ad68dc18453951b0b36150d9f85588aa290f33844a WatchSource:0}: Error finding container d8d6c0b84d44f3af5f4ef4ad68dc18453951b0b36150d9f85588aa290f33844a: Status 404 returned error can't find the container with id d8d6c0b84d44f3af5f4ef4ad68dc18453951b0b36150d9f85588aa290f33844a Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.562414 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.562460 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.757273 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:52 crc kubenswrapper[4872]: I0203 06:03:52.757582 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.395759 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-d755m" Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.397235 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d755m" Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.492737 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"885ce7ef-0dc0-4f65-8ae0-296be20278fc","Type":"ContainerStarted","Data":"4d0aa141be246d38eb969f914647a923eed1778b68672de1f3f94c95bd463177"} Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.492785 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"885ce7ef-0dc0-4f65-8ae0-296be20278fc","Type":"ContainerStarted","Data":"d8d6c0b84d44f3af5f4ef4ad68dc18453951b0b36150d9f85588aa290f33844a"} Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.609449 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rlcjj" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="registry-server" probeResult="failure" output=< Feb 03 06:03:53 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:03:53 crc kubenswrapper[4872]: > Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.746028 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wswtm" Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.746118 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wswtm" Feb 03 06:03:53 crc kubenswrapper[4872]: I0203 06:03:53.791745 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-crj46" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="registry-server" probeResult="failure" output=< Feb 03 06:03:53 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:03:53 crc kubenswrapper[4872]: > Feb 03 06:03:54 crc kubenswrapper[4872]: I0203 06:03:54.443178 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d755m" podUID="284c00ef-fffa-4fa7-8545-91f490740333" containerName="registry-server" probeResult="failure" output=< Feb 03 06:03:54 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:03:54 crc kubenswrapper[4872]: > Feb 03 06:03:54 crc kubenswrapper[4872]: I0203 06:03:54.785294 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wswtm" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="registry-server" probeResult="failure" output=< Feb 03 06:03:54 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:03:54 crc kubenswrapper[4872]: > Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.624350 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.656835 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=9.656817155 podStartE2EDuration="9.656817155s" podCreationTimestamp="2026-02-03 06:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 
06:03:53.515668799 +0000 UTC m=+204.098360213" watchObservedRunningTime="2026-02-03 06:04:00.656817155 +0000 UTC m=+211.239508579" Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.695081 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.800569 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.832291 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.832354 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.855470 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:04:00 crc kubenswrapper[4872]: I0203 06:04:00.881312 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.271476 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.271544 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.271601 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.272283 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.272439 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7" gracePeriod=600 Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.546606 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7" exitCode=0 Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.546818 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" 
event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7"} Feb 03 06:04:01 crc kubenswrapper[4872]: I0203 06:04:01.618467 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.207153 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prlzm"] Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.558250 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"2ace9d226a8cd5ac75a2cd87f022b584a0cdb93cf609db2260d7f1894dd4aabf"} Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.559208 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prlzm" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="registry-server" containerID="cri-o://46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be" gracePeriod=2 Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.643392 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.721279 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.801544 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.865278 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:04:02 crc kubenswrapper[4872]: I0203 06:04:02.990000 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.065116 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-catalog-content\") pod \"23276bc5-33eb-47db-8136-4576dbf6b945\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.065529 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-utilities\") pod \"23276bc5-33eb-47db-8136-4576dbf6b945\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.065563 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt9wg\" (UniqueName: \"kubernetes.io/projected/23276bc5-33eb-47db-8136-4576dbf6b945-kube-api-access-lt9wg\") pod \"23276bc5-33eb-47db-8136-4576dbf6b945\" (UID: \"23276bc5-33eb-47db-8136-4576dbf6b945\") " Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.066611 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-utilities" (OuterVolumeSpecName: "utilities") pod "23276bc5-33eb-47db-8136-4576dbf6b945" (UID: "23276bc5-33eb-47db-8136-4576dbf6b945"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.073514 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23276bc5-33eb-47db-8136-4576dbf6b945-kube-api-access-lt9wg" (OuterVolumeSpecName: "kube-api-access-lt9wg") pod "23276bc5-33eb-47db-8136-4576dbf6b945" (UID: "23276bc5-33eb-47db-8136-4576dbf6b945"). InnerVolumeSpecName "kube-api-access-lt9wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.113346 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23276bc5-33eb-47db-8136-4576dbf6b945" (UID: "23276bc5-33eb-47db-8136-4576dbf6b945"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.166907 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.166930 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt9wg\" (UniqueName: \"kubernetes.io/projected/23276bc5-33eb-47db-8136-4576dbf6b945-kube-api-access-lt9wg\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.166941 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23276bc5-33eb-47db-8136-4576dbf6b945-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.207309 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mrvtj"] Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.464463 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d755m" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.520264 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d755m" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.575031 4872 generic.go:334] "Generic (PLEG): container finished" podID="23276bc5-33eb-47db-8136-4576dbf6b945" containerID="46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be" exitCode=0 Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.576679 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prlzm" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.577071 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mrvtj" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="registry-server" containerID="cri-o://8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4" gracePeriod=2 Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.577451 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prlzm" event={"ID":"23276bc5-33eb-47db-8136-4576dbf6b945","Type":"ContainerDied","Data":"46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be"} Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.577485 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prlzm" event={"ID":"23276bc5-33eb-47db-8136-4576dbf6b945","Type":"ContainerDied","Data":"7f30c9507c70f1c74a0b6071afa8bc73e02b4cfc8b0c0a0d84c660d58037f00d"} Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.577507 4872 scope.go:117] "RemoveContainer" containerID="46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.616511 4872 scope.go:117] "RemoveContainer" containerID="9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.624700 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prlzm"] Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.631522 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prlzm"] Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.648234 4872 scope.go:117] "RemoveContainer" containerID="f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.749638 4872 scope.go:117] "RemoveContainer" containerID="46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be" Feb 03 06:04:03 crc kubenswrapper[4872]: E0203 06:04:03.750009 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be\": container with ID starting with 46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be not found: ID does not exist" containerID="46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.750058 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be"} err="failed to get container status \"46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be\": rpc error: code = NotFound desc = could not find container \"46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be\": container with ID starting with 46197d5276d0b88ee33be700656d723858f8d8305f5a4eb0dc786505cca991be not found: ID does not exist" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.750091 4872 scope.go:117] "RemoveContainer" containerID="9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e" Feb 03 06:04:03 crc kubenswrapper[4872]: E0203 06:04:03.750407 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e\": container with ID starting with 9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e not found: ID does not exist" containerID="9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.750440 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e"} err="failed to get container status \"9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e\": rpc error: code = NotFound desc = could not find container \"9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e\": container with ID starting with 9f2fdba50bb909e9d6a6798ab5f4f507e1ea5a86acf62c14396ab0ed300cb12e not found: ID does not exist" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.750461 4872 scope.go:117] "RemoveContainer" containerID="f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593" Feb 03 06:04:03 crc kubenswrapper[4872]: E0203 06:04:03.751070 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593\": container with ID starting with f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593 not found: ID does not exist" containerID="f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.751103 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593"} err="failed to get container status \"f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593\": rpc error: code = NotFound desc = could not find container \"f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593\": container with ID starting with f5fd7a5e18d34952d602d67c03a6c637b7c4ea7622f9326c3ce04324af6ff593 not found: ID does not exist" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.846984 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wswtm" Feb 03 06:04:03 crc kubenswrapper[4872]: I0203 06:04:03.914539 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wswtm" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.015199 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.107838 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-catalog-content\") pod \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.107884 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-utilities\") pod \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.107924 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwj9f\" (UniqueName: \"kubernetes.io/projected/fbc6a05e-2681-43a1-9c7a-5b20f879161a-kube-api-access-bwj9f\") pod \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\" (UID: \"fbc6a05e-2681-43a1-9c7a-5b20f879161a\") " Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.109353 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-utilities" (OuterVolumeSpecName: "utilities") pod "fbc6a05e-2681-43a1-9c7a-5b20f879161a" (UID: "fbc6a05e-2681-43a1-9c7a-5b20f879161a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.113182 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc6a05e-2681-43a1-9c7a-5b20f879161a-kube-api-access-bwj9f" (OuterVolumeSpecName: "kube-api-access-bwj9f") pod "fbc6a05e-2681-43a1-9c7a-5b20f879161a" (UID: "fbc6a05e-2681-43a1-9c7a-5b20f879161a"). InnerVolumeSpecName "kube-api-access-bwj9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.131059 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" path="/var/lib/kubelet/pods/23276bc5-33eb-47db-8136-4576dbf6b945/volumes" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.166863 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbc6a05e-2681-43a1-9c7a-5b20f879161a" (UID: "fbc6a05e-2681-43a1-9c7a-5b20f879161a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.209902 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.210232 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc6a05e-2681-43a1-9c7a-5b20f879161a-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.210336 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwj9f\" (UniqueName: \"kubernetes.io/projected/fbc6a05e-2681-43a1-9c7a-5b20f879161a-kube-api-access-bwj9f\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.588435 4872 generic.go:334] "Generic (PLEG): container finished" podID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerID="8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4" exitCode=0 Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.588456 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrvtj" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.588487 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrvtj" event={"ID":"fbc6a05e-2681-43a1-9c7a-5b20f879161a","Type":"ContainerDied","Data":"8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4"} Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.591966 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrvtj" event={"ID":"fbc6a05e-2681-43a1-9c7a-5b20f879161a","Type":"ContainerDied","Data":"18bda3e2fa2abdeede2cfcc218b659e39970a0fa49c0d2b306dfa619da804993"} Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.592003 4872 scope.go:117] "RemoveContainer" containerID="8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.621638 4872 scope.go:117] "RemoveContainer" containerID="b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.651902 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mrvtj"] Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.660393 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mrvtj"] Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.669281 4872 scope.go:117] "RemoveContainer" containerID="4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.699152 4872 scope.go:117] "RemoveContainer" containerID="8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4" Feb 03 06:04:04 crc kubenswrapper[4872]: E0203 06:04:04.699933 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4\": container with ID starting with 8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4 not found: ID does not exist" containerID="8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.700010 
4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4"} err="failed to get container status \"8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4\": rpc error: code = NotFound desc = could not find container \"8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4\": container with ID starting with 8cb5d18db11df2666953cb58fccc918ea7b37775fd90154bef70b1614bafdea4 not found: ID does not exist" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.700196 4872 scope.go:117] "RemoveContainer" containerID="b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d" Feb 03 06:04:04 crc kubenswrapper[4872]: E0203 06:04:04.700717 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d\": container with ID starting with b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d not found: ID does not exist" containerID="b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.700811 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d"} err="failed to get container status \"b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d\": rpc error: code = NotFound desc = could not find container \"b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d\": container with ID starting with b309f30b7a8740bb85f03670313c7b3591c4e5fb6b848bafecc304a2c5e88b5d not found: ID does not exist" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.700859 4872 scope.go:117] "RemoveContainer" containerID="4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb" Feb 03 06:04:04 crc kubenswrapper[4872]: E0203 06:04:04.701444 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb\": container with ID starting with 4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb not found: ID does not exist" containerID="4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb" Feb 03 06:04:04 crc kubenswrapper[4872]: I0203 06:04:04.701481 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb"} err="failed to get container status \"4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb\": rpc error: code = NotFound desc = could not find container \"4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb\": container with ID starting with 4cf4dfa8101366fa66884274582565be1fb2f26ea87196d2beedc85361eeabbb not found: ID does not exist" Feb 03 06:04:05 crc kubenswrapper[4872]: I0203 06:04:05.604840 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crj46"] Feb 03 06:04:05 crc kubenswrapper[4872]: I0203 06:04:05.605564 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-crj46" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="registry-server" containerID="cri-o://24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d" gracePeriod=2 Feb 03 
06:04:05 crc kubenswrapper[4872]: I0203 06:04:05.782740 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" podUID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" containerName="oauth-openshift" containerID="cri-o://07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469" gracePeriod=15 Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.050609 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.130517 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" path="/var/lib/kubelet/pods/fbc6a05e-2681-43a1-9c7a-5b20f879161a/volumes" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.137520 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-utilities\") pod \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.137557 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7qn8\" (UniqueName: \"kubernetes.io/projected/4b8fc51f-2388-428e-93c9-e6f2a1b14740-kube-api-access-r7qn8\") pod \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.137575 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-catalog-content\") pod \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\" (UID: \"4b8fc51f-2388-428e-93c9-e6f2a1b14740\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.141659 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-utilities" (OuterVolumeSpecName: "utilities") pod "4b8fc51f-2388-428e-93c9-e6f2a1b14740" (UID: "4b8fc51f-2388-428e-93c9-e6f2a1b14740"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.146095 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8fc51f-2388-428e-93c9-e6f2a1b14740-kube-api-access-r7qn8" (OuterVolumeSpecName: "kube-api-access-r7qn8") pod "4b8fc51f-2388-428e-93c9-e6f2a1b14740" (UID: "4b8fc51f-2388-428e-93c9-e6f2a1b14740"). InnerVolumeSpecName "kube-api-access-r7qn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.160768 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.166973 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b8fc51f-2388-428e-93c9-e6f2a1b14740" (UID: "4b8fc51f-2388-428e-93c9-e6f2a1b14740"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238567 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-policies\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238654 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn48x\" (UniqueName: \"kubernetes.io/projected/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-kube-api-access-xn48x\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238774 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-serving-cert\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238806 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-session\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238836 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-error\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238852 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-router-certs\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238871 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-trusted-ca-bundle\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238891 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-provider-selection\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238922 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-dir\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238938 4872 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-ocp-branding-template\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238961 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-cliconfig\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.238977 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-idp-0-file-data\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.239007 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-login\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.239022 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-service-ca\") pod \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\" (UID: \"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538\") " Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.239267 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.239279 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7qn8\" (UniqueName: \"kubernetes.io/projected/4b8fc51f-2388-428e-93c9-e6f2a1b14740-kube-api-access-r7qn8\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.239288 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b8fc51f-2388-428e-93c9-e6f2a1b14740-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.239736 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.240596 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.240747 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.240764 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.240824 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.243003 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.244167 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.244972 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.245143 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.245160 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.245356 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.245817 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.245882 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-kube-api-access-xn48x" (OuterVolumeSpecName: "kube-api-access-xn48x") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "kube-api-access-xn48x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.247023 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" (UID: "46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340355 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340398 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340408 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340418 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340429 4872 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340438 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340450 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340459 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340470 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340480 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340489 4872 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340498 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn48x\" (UniqueName: 
\"kubernetes.io/projected/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-kube-api-access-xn48x\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340507 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.340515 4872 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.622133 4872 generic.go:334] "Generic (PLEG): container finished" podID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerID="24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d" exitCode=0 Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.622231 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crj46" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.623164 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crj46" event={"ID":"4b8fc51f-2388-428e-93c9-e6f2a1b14740","Type":"ContainerDied","Data":"24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d"} Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.623453 4872 scope.go:117] "RemoveContainer" containerID="24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.623410 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crj46" event={"ID":"4b8fc51f-2388-428e-93c9-e6f2a1b14740","Type":"ContainerDied","Data":"2d4bb0fe9b60319b27b9044202ec580bc4006481d1f968a9191a435359829ac2"} Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.625795 4872 generic.go:334] "Generic (PLEG): container finished" podID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" containerID="07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469" exitCode=0 Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.625847 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" event={"ID":"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538","Type":"ContainerDied","Data":"07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469"} Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.625862 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" event={"ID":"46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538","Type":"ContainerDied","Data":"566cda45bcf5089d99474ad2f61c6f76b164521b7a53262152559dec2e248544"} Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.625936 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v5phf" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.653191 4872 scope.go:117] "RemoveContainer" containerID="93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.684100 4872 scope.go:117] "RemoveContainer" containerID="e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.695019 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v5phf"] Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.702857 4872 scope.go:117] "RemoveContainer" containerID="24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.702974 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v5phf"] Feb 03 06:04:06 crc kubenswrapper[4872]: E0203 06:04:06.704560 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d\": container with ID starting with 24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d not found: ID does not exist" containerID="24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.704594 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d"} err="failed to get container status \"24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d\": rpc error: code = NotFound desc = could not find container \"24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d\": container with ID starting with 24c66d05aa51d9f4b245ada7d527891f38e4f5ca0501068e74713e7efea3af9d not found: ID does not exist" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.704614 4872 scope.go:117] "RemoveContainer" containerID="93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a" Feb 03 06:04:06 crc kubenswrapper[4872]: E0203 06:04:06.706132 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a\": container with ID starting with 93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a not found: ID does not exist" containerID="93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.706161 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a"} err="failed to get container status \"93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a\": rpc error: code = NotFound desc = could not find container \"93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a\": container with ID starting with 93f0754ac1b64967a6307694da36a649728fd805968ba76593931731af5b996a not found: ID does not exist" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.706175 4872 scope.go:117] "RemoveContainer" containerID="e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.706578 4872 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crj46"] Feb 03 06:04:06 crc kubenswrapper[4872]: E0203 06:04:06.706935 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4\": container with ID starting with e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4 not found: ID does not exist" containerID="e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.706954 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4"} err="failed to get container status \"e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4\": rpc error: code = NotFound desc = could not find container \"e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4\": container with ID starting with e088bfd9c31e4662f5e023437663a3084d10ba9fdc53a681aa3a6762838eb1b4 not found: ID does not exist" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.706967 4872 scope.go:117] "RemoveContainer" containerID="07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.711334 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crj46"] Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.737914 4872 scope.go:117] "RemoveContainer" containerID="07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469" Feb 03 06:04:06 crc kubenswrapper[4872]: E0203 06:04:06.738468 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469\": container with ID starting with 07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469 not found: ID does not exist" containerID="07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469" Feb 03 06:04:06 crc kubenswrapper[4872]: I0203 06:04:06.738505 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469"} err="failed to get container status \"07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469\": rpc error: code = NotFound desc = could not find container \"07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469\": container with ID starting with 07062642a0d7567069a57b6f058a630740838af796b2e64d9a2d882391650469 not found: ID does not exist" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607496 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d6876856f-g64z4"] Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607787 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607813 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607832 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="extract-utilities" Feb 03 06:04:07 crc 
kubenswrapper[4872]: I0203 06:04:07.607844 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="extract-utilities" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607864 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="extract-content" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607875 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="extract-content" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607891 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" containerName="oauth-openshift" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607901 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" containerName="oauth-openshift" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607919 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607928 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607941 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="extract-utilities" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607950 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="extract-utilities" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607962 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="extract-utilities" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607969 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="extract-utilities" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607980 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="extract-content" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.607987 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="extract-content" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.607999 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="extract-content" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.608006 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" containerName="extract-content" Feb 03 06:04:07 crc kubenswrapper[4872]: E0203 06:04:07.608015 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.608023 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.608150 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="23276bc5-33eb-47db-8136-4576dbf6b945" 
containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.608164 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" containerName="oauth-openshift" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.608176 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.608190 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc6a05e-2681-43a1-9c7a-5b20f879161a" containerName="registry-server" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.608775 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.612377 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.612413 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.612773 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.612914 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.613350 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.614147 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.614205 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.616561 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.616655 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.620855 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.621171 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.623009 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.636825 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.638879 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d6876856f-g64z4"] Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.649577 4872 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.650895 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.663963 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664501 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664533 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-session\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664558 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664583 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09bfaf8e-51a7-4789-bce7-3caefe928f8a-audit-dir\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664607 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-audit-policies\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664630 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 
crc kubenswrapper[4872]: I0203 06:04:07.664706 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664820 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664841 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-error\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.664871 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87s8k\" (UniqueName: \"kubernetes.io/projected/09bfaf8e-51a7-4789-bce7-3caefe928f8a-kube-api-access-87s8k\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.665022 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.665071 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-login\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.665110 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767035 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: 
\"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767129 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767199 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767244 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-session\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767283 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767325 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09bfaf8e-51a7-4789-bce7-3caefe928f8a-audit-dir\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767363 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-audit-policies\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767401 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767440 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: 
\"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767513 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767547 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-error\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767596 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87s8k\" (UniqueName: \"kubernetes.io/projected/09bfaf8e-51a7-4789-bce7-3caefe928f8a-kube-api-access-87s8k\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767646 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.767694 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-login\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.769282 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.770447 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-audit-policies\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.770772 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09bfaf8e-51a7-4789-bce7-3caefe928f8a-audit-dir\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc 
kubenswrapper[4872]: I0203 06:04:07.773513 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.773523 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.773810 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-login\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.774311 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.774451 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.776638 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.777864 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-session\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.786036 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-template-error\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 
06:04:07.786361 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.786824 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/09bfaf8e-51a7-4789-bce7-3caefe928f8a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.790811 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87s8k\" (UniqueName: \"kubernetes.io/projected/09bfaf8e-51a7-4789-bce7-3caefe928f8a-kube-api-access-87s8k\") pod \"oauth-openshift-7d6876856f-g64z4\" (UID: \"09bfaf8e-51a7-4789-bce7-3caefe928f8a\") " pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:07 crc kubenswrapper[4872]: I0203 06:04:07.927268 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.004290 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wswtm"] Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.004636 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wswtm" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="registry-server" containerID="cri-o://dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76" gracePeriod=2 Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.146091 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538" path="/var/lib/kubelet/pods/46df8d0a-c8f5-4e7d-a8bd-dbe67bb51538/volumes" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.146806 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8fc51f-2388-428e-93c9-e6f2a1b14740" path="/var/lib/kubelet/pods/4b8fc51f-2388-428e-93c9-e6f2a1b14740/volumes" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.175287 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d6876856f-g64z4"] Feb 03 06:04:08 crc kubenswrapper[4872]: W0203 06:04:08.179937 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09bfaf8e_51a7_4789_bce7_3caefe928f8a.slice/crio-b2304baadda748b6db04bedc2b7cfb9b18fc48ec73164a140b90e563a569b673 WatchSource:0}: Error finding container b2304baadda748b6db04bedc2b7cfb9b18fc48ec73164a140b90e563a569b673: Status 404 returned error can't find the container with id b2304baadda748b6db04bedc2b7cfb9b18fc48ec73164a140b90e563a569b673 Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.359369 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wswtm" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.482274 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6g84\" (UniqueName: \"kubernetes.io/projected/5d42681c-20a8-4b9c-843e-89102105dcfe-kube-api-access-z6g84\") pod \"5d42681c-20a8-4b9c-843e-89102105dcfe\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.482371 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-utilities\") pod \"5d42681c-20a8-4b9c-843e-89102105dcfe\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.482429 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-catalog-content\") pod \"5d42681c-20a8-4b9c-843e-89102105dcfe\" (UID: \"5d42681c-20a8-4b9c-843e-89102105dcfe\") " Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.483304 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-utilities" (OuterVolumeSpecName: "utilities") pod "5d42681c-20a8-4b9c-843e-89102105dcfe" (UID: "5d42681c-20a8-4b9c-843e-89102105dcfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.486267 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d42681c-20a8-4b9c-843e-89102105dcfe-kube-api-access-z6g84" (OuterVolumeSpecName: "kube-api-access-z6g84") pod "5d42681c-20a8-4b9c-843e-89102105dcfe" (UID: "5d42681c-20a8-4b9c-843e-89102105dcfe"). InnerVolumeSpecName "kube-api-access-z6g84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.583455 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6g84\" (UniqueName: \"kubernetes.io/projected/5d42681c-20a8-4b9c-843e-89102105dcfe-kube-api-access-z6g84\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.583504 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.590758 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d42681c-20a8-4b9c-843e-89102105dcfe" (UID: "5d42681c-20a8-4b9c-843e-89102105dcfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.642539 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" event={"ID":"09bfaf8e-51a7-4789-bce7-3caefe928f8a","Type":"ContainerStarted","Data":"f8d612a5ea3cd5f7f8f84b1ced0f0faa33a5693af8706ccf15d7cd22f0cbc11e"} Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.642626 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" event={"ID":"09bfaf8e-51a7-4789-bce7-3caefe928f8a","Type":"ContainerStarted","Data":"b2304baadda748b6db04bedc2b7cfb9b18fc48ec73164a140b90e563a569b673"} Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.642667 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.645512 4872 generic.go:334] "Generic (PLEG): container finished" podID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerID="dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76" exitCode=0 Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.645577 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wswtm" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.645588 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wswtm" event={"ID":"5d42681c-20a8-4b9c-843e-89102105dcfe","Type":"ContainerDied","Data":"dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76"} Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.645667 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wswtm" event={"ID":"5d42681c-20a8-4b9c-843e-89102105dcfe","Type":"ContainerDied","Data":"64320244273a6cde7b9f990152934660a863587e71613a327589e9f8f86262e8"} Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.645736 4872 scope.go:117] "RemoveContainer" containerID="dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.666943 4872 scope.go:117] "RemoveContainer" containerID="dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.671376 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" podStartSLOduration=28.671356113999998 podStartE2EDuration="28.671356114s" podCreationTimestamp="2026-02-03 06:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:04:08.667186604 +0000 UTC m=+219.249878018" watchObservedRunningTime="2026-02-03 06:04:08.671356114 +0000 UTC m=+219.254047538" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.693763 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d42681c-20a8-4b9c-843e-89102105dcfe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.695786 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wswtm"] Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.700857 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-wswtm"] Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.701210 4872 scope.go:117] "RemoveContainer" containerID="e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.721627 4872 scope.go:117] "RemoveContainer" containerID="dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76" Feb 03 06:04:08 crc kubenswrapper[4872]: E0203 06:04:08.722150 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76\": container with ID starting with dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76 not found: ID does not exist" containerID="dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.722192 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76"} err="failed to get container status \"dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76\": rpc error: code = NotFound desc = could not find container \"dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76\": container with ID starting with dfb0484a4e18820233c6fe01e17f71922ce3b9c08e6a75c970de2f152db5dc76 not found: ID does not exist" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.722219 4872 scope.go:117] "RemoveContainer" containerID="dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8" Feb 03 06:04:08 crc kubenswrapper[4872]: E0203 06:04:08.723388 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8\": container with ID starting with dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8 not found: ID does not exist" containerID="dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.723417 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8"} err="failed to get container status \"dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8\": rpc error: code = NotFound desc = could not find container \"dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8\": container with ID starting with dd4c85219fb396d4ba0a3438b70bfee02b63231d09e7f19e9002e029e1d036e8 not found: ID does not exist" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.723434 4872 scope.go:117] "RemoveContainer" containerID="e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098" Feb 03 06:04:08 crc kubenswrapper[4872]: E0203 06:04:08.723773 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098\": container with ID starting with e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098 not found: ID does not exist" containerID="e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.723813 4872 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098"} err="failed to get container status \"e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098\": rpc error: code = NotFound desc = could not find container \"e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098\": container with ID starting with e7e0e6c7b7fa527f815b69cabb7c96f9d9ee602169c1ea403f7efaeea83aa098 not found: ID does not exist" Feb 03 06:04:08 crc kubenswrapper[4872]: I0203 06:04:08.739901 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d6876856f-g64z4" Feb 03 06:04:10 crc kubenswrapper[4872]: I0203 06:04:10.129036 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" path="/var/lib/kubelet/pods/5d42681c-20a8-4b9c-843e-89102105dcfe/volumes" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.741388 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8kmz"] Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.742147 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r8kmz" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="registry-server" containerID="cri-o://865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d" gracePeriod=30 Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.753661 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8g49"] Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.754097 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8g49" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="registry-server" containerID="cri-o://d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6" gracePeriod=30 Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.764712 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kf6f5"] Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.764898 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerName="marketplace-operator" containerID="cri-o://77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78" gracePeriod=30 Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.778198 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcjj"] Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.778434 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rlcjj" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="registry-server" containerID="cri-o://ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36" gracePeriod=30 Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.793983 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d755m"] Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.794228 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d755m" podUID="284c00ef-fffa-4fa7-8545-91f490740333" 
containerName="registry-server" containerID="cri-o://06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08" gracePeriod=30 Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.797584 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sj6w"] Feb 03 06:04:26 crc kubenswrapper[4872]: E0203 06:04:26.800081 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="registry-server" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.800191 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="registry-server" Feb 03 06:04:26 crc kubenswrapper[4872]: E0203 06:04:26.800253 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="extract-utilities" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.800315 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="extract-utilities" Feb 03 06:04:26 crc kubenswrapper[4872]: E0203 06:04:26.800380 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="extract-content" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.800438 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="extract-content" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.800583 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d42681c-20a8-4b9c-843e-89102105dcfe" containerName="registry-server" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.801497 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.808258 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sj6w"] Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.938541 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fnn8\" (UniqueName: \"kubernetes.io/projected/c4bf20b9-bd1d-4d8b-8547-500924c14af5-kube-api-access-8fnn8\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.938640 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4bf20b9-bd1d-4d8b-8547-500924c14af5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:26 crc kubenswrapper[4872]: I0203 06:04:26.938732 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4bf20b9-bd1d-4d8b-8547-500924c14af5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.039368 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fnn8\" (UniqueName: \"kubernetes.io/projected/c4bf20b9-bd1d-4d8b-8547-500924c14af5-kube-api-access-8fnn8\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.039418 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4bf20b9-bd1d-4d8b-8547-500924c14af5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.039483 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4bf20b9-bd1d-4d8b-8547-500924c14af5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.040574 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4bf20b9-bd1d-4d8b-8547-500924c14af5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.045345 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c4bf20b9-bd1d-4d8b-8547-500924c14af5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.066305 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fnn8\" (UniqueName: \"kubernetes.io/projected/c4bf20b9-bd1d-4d8b-8547-500924c14af5-kube-api-access-8fnn8\") pod \"marketplace-operator-79b997595-7sj6w\" (UID: \"c4bf20b9-bd1d-4d8b-8547-500924c14af5\") " pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.080536 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.094336 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.236387 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.245121 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg6kw\" (UniqueName: \"kubernetes.io/projected/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-kube-api-access-cg6kw\") pod \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.245230 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-catalog-content\") pod \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.245281 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-utilities\") pod \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\" (UID: \"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.257958 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-utilities" (OuterVolumeSpecName: "utilities") pod "51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" (UID: "51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.260062 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d755m" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.266373 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.269670 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-kube-api-access-cg6kw" (OuterVolumeSpecName: "kube-api-access-cg6kw") pod "51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" (UID: "51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc"). 
InnerVolumeSpecName "kube-api-access-cg6kw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.283088 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.346454 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-operator-metrics\") pod \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.346497 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs475\" (UniqueName: \"kubernetes.io/projected/284c00ef-fffa-4fa7-8545-91f490740333-kube-api-access-rs475\") pod \"284c00ef-fffa-4fa7-8545-91f490740333\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.346520 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-catalog-content\") pod \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.346537 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-utilities\") pod \"284c00ef-fffa-4fa7-8545-91f490740333\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.346568 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvvhk\" (UniqueName: \"kubernetes.io/projected/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-kube-api-access-kvvhk\") pod \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.347382 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-utilities" (OuterVolumeSpecName: "utilities") pod "284c00ef-fffa-4fa7-8545-91f490740333" (UID: "284c00ef-fffa-4fa7-8545-91f490740333"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.348119 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-trusted-ca\") pod \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.348147 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnnkz\" (UniqueName: \"kubernetes.io/projected/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-kube-api-access-vnnkz\") pod \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\" (UID: \"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.348168 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-catalog-content\") pod \"284c00ef-fffa-4fa7-8545-91f490740333\" (UID: \"284c00ef-fffa-4fa7-8545-91f490740333\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.348192 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-utilities\") pod \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\" (UID: \"3270edf2-a9cf-47e7-8e06-90f0f97fb13f\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.348430 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.348441 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.348471 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg6kw\" (UniqueName: \"kubernetes.io/projected/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-kube-api-access-cg6kw\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.349617 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" (UID: "fb89a0e7-ca7f-4f14-bad9-1b8b054328ed"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.352900 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" (UID: "fb89a0e7-ca7f-4f14-bad9-1b8b054328ed"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.356536 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-utilities" (OuterVolumeSpecName: "utilities") pod "3270edf2-a9cf-47e7-8e06-90f0f97fb13f" (UID: "3270edf2-a9cf-47e7-8e06-90f0f97fb13f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.358391 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-kube-api-access-kvvhk" (OuterVolumeSpecName: "kube-api-access-kvvhk") pod "3270edf2-a9cf-47e7-8e06-90f0f97fb13f" (UID: "3270edf2-a9cf-47e7-8e06-90f0f97fb13f"). InnerVolumeSpecName "kube-api-access-kvvhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.359042 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-kube-api-access-vnnkz" (OuterVolumeSpecName: "kube-api-access-vnnkz") pod "fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" (UID: "fb89a0e7-ca7f-4f14-bad9-1b8b054328ed"). InnerVolumeSpecName "kube-api-access-vnnkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.362644 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284c00ef-fffa-4fa7-8545-91f490740333-kube-api-access-rs475" (OuterVolumeSpecName: "kube-api-access-rs475") pod "284c00ef-fffa-4fa7-8545-91f490740333" (UID: "284c00ef-fffa-4fa7-8545-91f490740333"). InnerVolumeSpecName "kube-api-access-rs475". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.387497 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" (UID: "51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.390976 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3270edf2-a9cf-47e7-8e06-90f0f97fb13f" (UID: "3270edf2-a9cf-47e7-8e06-90f0f97fb13f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.449272 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdwrt\" (UniqueName: \"kubernetes.io/projected/176a4bdd-a364-4676-bd0f-2a7d286fdab9-kube-api-access-kdwrt\") pod \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.449363 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-utilities\") pod \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.449626 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-catalog-content\") pod \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\" (UID: \"176a4bdd-a364-4676-bd0f-2a7d286fdab9\") " Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.449981 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450080 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450156 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs475\" (UniqueName: \"kubernetes.io/projected/284c00ef-fffa-4fa7-8545-91f490740333-kube-api-access-rs475\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450376 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450449 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvvhk\" (UniqueName: \"kubernetes.io/projected/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-kube-api-access-kvvhk\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450528 4872 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450601 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnnkz\" (UniqueName: \"kubernetes.io/projected/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed-kube-api-access-vnnkz\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450710 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3270edf2-a9cf-47e7-8e06-90f0f97fb13f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.450763 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-utilities" (OuterVolumeSpecName: "utilities") pod "176a4bdd-a364-4676-bd0f-2a7d286fdab9" (UID: "176a4bdd-a364-4676-bd0f-2a7d286fdab9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.454177 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176a4bdd-a364-4676-bd0f-2a7d286fdab9-kube-api-access-kdwrt" (OuterVolumeSpecName: "kube-api-access-kdwrt") pod "176a4bdd-a364-4676-bd0f-2a7d286fdab9" (UID: "176a4bdd-a364-4676-bd0f-2a7d286fdab9"). InnerVolumeSpecName "kube-api-access-kdwrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.476866 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "176a4bdd-a364-4676-bd0f-2a7d286fdab9" (UID: "176a4bdd-a364-4676-bd0f-2a7d286fdab9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.479791 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "284c00ef-fffa-4fa7-8545-91f490740333" (UID: "284c00ef-fffa-4fa7-8545-91f490740333"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.552074 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdwrt\" (UniqueName: \"kubernetes.io/projected/176a4bdd-a364-4676-bd0f-2a7d286fdab9-kube-api-access-kdwrt\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.552100 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.552112 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/284c00ef-fffa-4fa7-8545-91f490740333-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.552121 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176a4bdd-a364-4676-bd0f-2a7d286fdab9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.623395 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7sj6w"] Feb 03 06:04:27 crc kubenswrapper[4872]: W0203 06:04:27.629454 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4bf20b9_bd1d_4d8b_8547_500924c14af5.slice/crio-3bf40208e16e7a37529bf6bce1a7a4acd589cb91fb54b13d8314442057d0b12f WatchSource:0}: Error finding container 3bf40208e16e7a37529bf6bce1a7a4acd589cb91fb54b13d8314442057d0b12f: Status 404 returned error can't find the container with id 3bf40208e16e7a37529bf6bce1a7a4acd589cb91fb54b13d8314442057d0b12f Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.771951 4872 generic.go:334] "Generic (PLEG): container 
finished" podID="284c00ef-fffa-4fa7-8545-91f490740333" containerID="06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08" exitCode=0 Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.771989 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d755m" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.772006 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d755m" event={"ID":"284c00ef-fffa-4fa7-8545-91f490740333","Type":"ContainerDied","Data":"06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.772781 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d755m" event={"ID":"284c00ef-fffa-4fa7-8545-91f490740333","Type":"ContainerDied","Data":"874a035a16c9bb430d29fbb80dd005db94e2d5f3875067dd8fb03acb13b1df60"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.772842 4872 scope.go:117] "RemoveContainer" containerID="06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.780089 4872 generic.go:334] "Generic (PLEG): container finished" podID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerID="77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78" exitCode=0 Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.780253 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" event={"ID":"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed","Type":"ContainerDied","Data":"77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.780336 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" event={"ID":"fb89a0e7-ca7f-4f14-bad9-1b8b054328ed","Type":"ContainerDied","Data":"c38a9a2b964f6ca9bf835292730c0facb4289bd03f7a3c14d4bfd0625e078dcc"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.780472 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kf6f5" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.797415 4872 generic.go:334] "Generic (PLEG): container finished" podID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerID="ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36" exitCode=0 Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.797493 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcjj" event={"ID":"176a4bdd-a364-4676-bd0f-2a7d286fdab9","Type":"ContainerDied","Data":"ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.797521 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcjj" event={"ID":"176a4bdd-a364-4676-bd0f-2a7d286fdab9","Type":"ContainerDied","Data":"d7a2d1bd74e18399bb9780b2d1a411d47019585056782f73c7c99f93151d0b7a"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.797617 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlcjj" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.801165 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" event={"ID":"c4bf20b9-bd1d-4d8b-8547-500924c14af5","Type":"ContainerStarted","Data":"112c2f099d0bc4b3395e4e67e87537aa234d9e6c1a5438583e8df235b320e52b"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.801221 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" event={"ID":"c4bf20b9-bd1d-4d8b-8547-500924c14af5","Type":"ContainerStarted","Data":"3bf40208e16e7a37529bf6bce1a7a4acd589cb91fb54b13d8314442057d0b12f"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.801919 4872 scope.go:117] "RemoveContainer" containerID="91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.802029 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.805794 4872 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7sj6w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.805851 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" podUID="c4bf20b9-bd1d-4d8b-8547-500924c14af5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.806977 4872 generic.go:334] "Generic (PLEG): container finished" podID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerID="d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6" exitCode=0 Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.807171 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8g49" event={"ID":"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc","Type":"ContainerDied","Data":"d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.807215 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8g49" event={"ID":"51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc","Type":"ContainerDied","Data":"59df17169aee6c69d64fe96101e792d63f6bcf93b168770efd8cdad71bba133d"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.807514 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8g49" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.811896 4872 generic.go:334] "Generic (PLEG): container finished" podID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerID="865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d" exitCode=0 Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.811943 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8kmz" event={"ID":"3270edf2-a9cf-47e7-8e06-90f0f97fb13f","Type":"ContainerDied","Data":"865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.811973 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r8kmz" event={"ID":"3270edf2-a9cf-47e7-8e06-90f0f97fb13f","Type":"ContainerDied","Data":"17575f68971135f84f4e5c4db39021adc408c198a14bec27d0190e75d5a94e81"} Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.812442 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r8kmz" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.821504 4872 scope.go:117] "RemoveContainer" containerID="9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.827279 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" podStartSLOduration=1.827261477 podStartE2EDuration="1.827261477s" podCreationTimestamp="2026-02-03 06:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:04:27.821967847 +0000 UTC m=+238.404659251" watchObservedRunningTime="2026-02-03 06:04:27.827261477 +0000 UTC m=+238.409952921" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.838205 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kf6f5"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.845106 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kf6f5"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.849826 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcjj"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.852758 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcjj"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.854448 4872 scope.go:117] "RemoveContainer" containerID="06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08" Feb 03 06:04:27 crc kubenswrapper[4872]: E0203 06:04:27.855061 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08\": container with ID starting with 06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08 not found: ID does not exist" containerID="06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.855106 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08"} err="failed to get 
container status \"06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08\": rpc error: code = NotFound desc = could not find container \"06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08\": container with ID starting with 06644b7e826f2ce476656cd8106ac4cf449e4227760c0b4547a75604f0423e08 not found: ID does not exist" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.855136 4872 scope.go:117] "RemoveContainer" containerID="91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19" Feb 03 06:04:27 crc kubenswrapper[4872]: E0203 06:04:27.855615 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19\": container with ID starting with 91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19 not found: ID does not exist" containerID="91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.855653 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19"} err="failed to get container status \"91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19\": rpc error: code = NotFound desc = could not find container \"91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19\": container with ID starting with 91e52f3e6f19119e654ac7cc170ec95a7a599fd32758cb7d89d4d9c560d2bf19 not found: ID does not exist" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.855681 4872 scope.go:117] "RemoveContainer" containerID="9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c" Feb 03 06:04:27 crc kubenswrapper[4872]: E0203 06:04:27.856001 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c\": container with ID starting with 9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c not found: ID does not exist" containerID="9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.856070 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c"} err="failed to get container status \"9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c\": rpc error: code = NotFound desc = could not find container \"9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c\": container with ID starting with 9cfa421b2bacdd1f49c0fab049b45c6ac1bad8879f6a9b75cfcfb0f41154083c not found: ID does not exist" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.856088 4872 scope.go:117] "RemoveContainer" containerID="77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.869665 4872 scope.go:117] "RemoveContainer" containerID="77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78" Feb 03 06:04:27 crc kubenswrapper[4872]: E0203 06:04:27.870083 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78\": container with ID starting with 77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78 not found: ID does not 
exist" containerID="77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.870113 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78"} err="failed to get container status \"77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78\": rpc error: code = NotFound desc = could not find container \"77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78\": container with ID starting with 77600ebe35cfac3e27e238ceeac132ac024f4bd785d74dc6953b766ab7115d78 not found: ID does not exist" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.870133 4872 scope.go:117] "RemoveContainer" containerID="ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.875160 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d755m"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.878558 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d755m"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.886728 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r8kmz"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.888381 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r8kmz"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.907375 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8g49"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.909514 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8g49"] Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.921235 4872 scope.go:117] "RemoveContainer" containerID="1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.931280 4872 scope.go:117] "RemoveContainer" containerID="558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.946326 4872 scope.go:117] "RemoveContainer" containerID="ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36" Feb 03 06:04:27 crc kubenswrapper[4872]: E0203 06:04:27.947310 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36\": container with ID starting with ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36 not found: ID does not exist" containerID="ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.947341 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36"} err="failed to get container status \"ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36\": rpc error: code = NotFound desc = could not find container \"ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36\": container with ID starting with ba756658c26b6b414b5449271d0c1bd519ebf8876530c56bac6edf299f5ffd36 not found: ID does not exist" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.947363 4872 
scope.go:117] "RemoveContainer" containerID="1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265" Feb 03 06:04:27 crc kubenswrapper[4872]: E0203 06:04:27.947913 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265\": container with ID starting with 1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265 not found: ID does not exist" containerID="1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.947940 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265"} err="failed to get container status \"1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265\": rpc error: code = NotFound desc = could not find container \"1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265\": container with ID starting with 1c39b2740e8da524dd4242fcc3c2ede33debd3806ff829cf9a0855e401332265 not found: ID does not exist" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.947954 4872 scope.go:117] "RemoveContainer" containerID="558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3" Feb 03 06:04:27 crc kubenswrapper[4872]: E0203 06:04:27.948163 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3\": container with ID starting with 558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3 not found: ID does not exist" containerID="558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.948183 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3"} err="failed to get container status \"558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3\": rpc error: code = NotFound desc = could not find container \"558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3\": container with ID starting with 558509ab5e34fce27214227373b4845599f1e84b0befaa4f4f2b12416411b2f3 not found: ID does not exist" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.948195 4872 scope.go:117] "RemoveContainer" containerID="d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.962880 4872 scope.go:117] "RemoveContainer" containerID="8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317" Feb 03 06:04:27 crc kubenswrapper[4872]: I0203 06:04:27.980879 4872 scope.go:117] "RemoveContainer" containerID="e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.000173 4872 scope.go:117] "RemoveContainer" containerID="d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.001543 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6\": container with ID starting with d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6 not found: ID does not exist" 
containerID="d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.001571 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6"} err="failed to get container status \"d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6\": rpc error: code = NotFound desc = could not find container \"d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6\": container with ID starting with d51898428cd7882c0854dec0111abfee93beaaf87267a2ac941830d29b0f31f6 not found: ID does not exist" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.001609 4872 scope.go:117] "RemoveContainer" containerID="8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.001937 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317\": container with ID starting with 8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317 not found: ID does not exist" containerID="8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.001984 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317"} err="failed to get container status \"8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317\": rpc error: code = NotFound desc = could not find container \"8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317\": container with ID starting with 8cb34e77348461c7485fd22a624b12145e442f5695a2cfa9b489209ad8933317 not found: ID does not exist" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.001998 4872 scope.go:117] "RemoveContainer" containerID="e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.002269 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a\": container with ID starting with e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a not found: ID does not exist" containerID="e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.002309 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a"} err="failed to get container status \"e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a\": rpc error: code = NotFound desc = could not find container \"e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a\": container with ID starting with e725b004b4196f111d0ffc6fd90bb05e0f71869dc3f69945789c29b0ef8a745a not found: ID does not exist" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.002339 4872 scope.go:117] "RemoveContainer" containerID="865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.016149 4872 scope.go:117] "RemoveContainer" containerID="052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 
06:04:28.026599 4872 scope.go:117] "RemoveContainer" containerID="ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.039642 4872 scope.go:117] "RemoveContainer" containerID="865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.039981 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d\": container with ID starting with 865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d not found: ID does not exist" containerID="865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.040100 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d"} err="failed to get container status \"865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d\": rpc error: code = NotFound desc = could not find container \"865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d\": container with ID starting with 865ee35eb2d73262177ba2d76902bec838f81a1d047d1aa93ed742a54f61f69d not found: ID does not exist" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.040122 4872 scope.go:117] "RemoveContainer" containerID="052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.040454 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d\": container with ID starting with 052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d not found: ID does not exist" containerID="052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.040493 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d"} err="failed to get container status \"052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d\": rpc error: code = NotFound desc = could not find container \"052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d\": container with ID starting with 052705160cc441c959d8d75a19a01a7214254f5877613e2208343bd49018dd9d not found: ID does not exist" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.040537 4872 scope.go:117] "RemoveContainer" containerID="ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.040847 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a\": container with ID starting with ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a not found: ID does not exist" containerID="ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.040885 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a"} err="failed to get container status 
\"ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a\": rpc error: code = NotFound desc = could not find container \"ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a\": container with ID starting with ca75092b6f5b9445296a907534981ea2eb87f867073553ac4e939ece62ed338a not found: ID does not exist" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.128312 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" path="/var/lib/kubelet/pods/176a4bdd-a364-4676-bd0f-2a7d286fdab9/volumes" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.128883 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284c00ef-fffa-4fa7-8545-91f490740333" path="/var/lib/kubelet/pods/284c00ef-fffa-4fa7-8545-91f490740333/volumes" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.129411 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" path="/var/lib/kubelet/pods/3270edf2-a9cf-47e7-8e06-90f0f97fb13f/volumes" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.130397 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" path="/var/lib/kubelet/pods/51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc/volumes" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.130975 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" path="/var/lib/kubelet/pods/fb89a0e7-ca7f-4f14-bad9-1b8b054328ed/volumes" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.351909 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ttvm7"] Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352133 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352145 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352155 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284c00ef-fffa-4fa7-8545-91f490740333" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352161 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="284c00ef-fffa-4fa7-8545-91f490740333" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352168 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352175 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352184 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284c00ef-fffa-4fa7-8545-91f490740333" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352190 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="284c00ef-fffa-4fa7-8545-91f490740333" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352201 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerName="marketplace-operator" Feb 03 
06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352206 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerName="marketplace-operator" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352213 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352219 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352226 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284c00ef-fffa-4fa7-8545-91f490740333" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352231 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="284c00ef-fffa-4fa7-8545-91f490740333" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352240 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352245 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352254 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352259 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352268 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352273 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352282 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352288 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="extract-content" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352299 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352304 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="extract-utilities" Feb 03 06:04:28 crc kubenswrapper[4872]: E0203 06:04:28.352313 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352319 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352426 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="284c00ef-fffa-4fa7-8545-91f490740333" 
containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352438 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="3270edf2-a9cf-47e7-8e06-90f0f97fb13f" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352444 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb89a0e7-ca7f-4f14-bad9-1b8b054328ed" containerName="marketplace-operator" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352454 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="176a4bdd-a364-4676-bd0f-2a7d286fdab9" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.352462 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c9d36d-bd3d-41a2-bf5a-bc0d8c5baffc" containerName="registry-server" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.353751 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.356141 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.362877 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttvm7"] Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.463391 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6wvc\" (UniqueName: \"kubernetes.io/projected/088135ef-2437-4cab-b009-302268e318d5-kube-api-access-t6wvc\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.463456 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088135ef-2437-4cab-b009-302268e318d5-catalog-content\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.463554 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088135ef-2437-4cab-b009-302268e318d5-utilities\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.565006 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088135ef-2437-4cab-b009-302268e318d5-catalog-content\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.565055 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088135ef-2437-4cab-b009-302268e318d5-utilities\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.565096 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t6wvc\" (UniqueName: \"kubernetes.io/projected/088135ef-2437-4cab-b009-302268e318d5-kube-api-access-t6wvc\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.565600 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088135ef-2437-4cab-b009-302268e318d5-catalog-content\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.565646 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088135ef-2437-4cab-b009-302268e318d5-utilities\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.581984 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6wvc\" (UniqueName: \"kubernetes.io/projected/088135ef-2437-4cab-b009-302268e318d5-kube-api-access-t6wvc\") pod \"certified-operators-ttvm7\" (UID: \"088135ef-2437-4cab-b009-302268e318d5\") " pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.691949 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:28 crc kubenswrapper[4872]: I0203 06:04:28.822805 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7sj6w" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.049289 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttvm7"] Feb 03 06:04:29 crc kubenswrapper[4872]: W0203 06:04:29.050479 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088135ef_2437_4cab_b009_302268e318d5.slice/crio-e5d6aeee752fa56e4c5e9307203e44497ce5723e9e9f67ef75a0429249b697d9 WatchSource:0}: Error finding container e5d6aeee752fa56e4c5e9307203e44497ce5723e9e9f67ef75a0429249b697d9: Status 404 returned error can't find the container with id e5d6aeee752fa56e4c5e9307203e44497ce5723e9e9f67ef75a0429249b697d9 Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.752612 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bmlvt"] Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.754564 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.756545 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.763932 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmlvt"] Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.826963 4872 generic.go:334] "Generic (PLEG): container finished" podID="088135ef-2437-4cab-b009-302268e318d5" containerID="f577877f36f929a42fc4bea307938f7a50cd1d202651c97f766ccfe47a238d2c" exitCode=0 Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.827802 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttvm7" event={"ID":"088135ef-2437-4cab-b009-302268e318d5","Type":"ContainerDied","Data":"f577877f36f929a42fc4bea307938f7a50cd1d202651c97f766ccfe47a238d2c"} Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.827824 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttvm7" event={"ID":"088135ef-2437-4cab-b009-302268e318d5","Type":"ContainerStarted","Data":"e5d6aeee752fa56e4c5e9307203e44497ce5723e9e9f67ef75a0429249b697d9"} Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.880574 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccp72\" (UniqueName: \"kubernetes.io/projected/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-kube-api-access-ccp72\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.880625 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-utilities\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.880656 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-catalog-content\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.981605 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-catalog-content\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.981878 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccp72\" (UniqueName: \"kubernetes.io/projected/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-kube-api-access-ccp72\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.981990 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-utilities\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.982740 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-catalog-content\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:29 crc kubenswrapper[4872]: I0203 06:04:29.984331 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-utilities\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.017139 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccp72\" (UniqueName: \"kubernetes.io/projected/404e90f0-f0f9-41d9-ac4d-9aeb63770d50-kube-api-access-ccp72\") pod \"redhat-marketplace-bmlvt\" (UID: \"404e90f0-f0f9-41d9-ac4d-9aeb63770d50\") " pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.147548 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.157094 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.364112 4872 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.365157 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.367069 4872 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.367329 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302" gracePeriod=15 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.367510 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9" gracePeriod=15 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.367580 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106" gracePeriod=15 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.367636 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c" gracePeriod=15 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.367740 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037" gracePeriod=15 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368290 4872 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.368453 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368468 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.368478 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368486 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.368498 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368506 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 06:04:30 crc 
kubenswrapper[4872]: E0203 06:04:30.368523 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368532 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.368543 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368553 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.368565 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368572 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368676 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368743 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368756 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368768 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368780 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368791 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.368903 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.368914 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.441442 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.487507 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 
06:04:30.487549 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.487573 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.487607 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.487629 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.487645 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.487661 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.487676 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588521 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588566 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588587 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588603 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588619 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588652 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588672 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588717 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588784 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588815 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588835 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 
06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588855 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588873 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588892 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588913 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.588932 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.717315 4872 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 03 06:04:30 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2" Netns:"/var/run/netns/f359f037-4b43-4544-9071-cbb72ec49614" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:30 crc 
kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:30 crc kubenswrapper[4872]: > Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.717384 4872 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 03 06:04:30 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2" Netns:"/var/run/netns/f359f037-4b43-4544-9071-cbb72ec49614" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:30 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:30 crc kubenswrapper[4872]: > pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.717403 4872 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 03 06:04:30 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2" Netns:"/var/run/netns/f359f037-4b43-4544-9071-cbb72ec49614" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:30 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:30 crc kubenswrapper[4872]: > pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:30 crc kubenswrapper[4872]: E0203 06:04:30.717451 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-bmlvt_openshift-marketplace(404e90f0-f0f9-41d9-ac4d-9aeb63770d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-bmlvt_openshift-marketplace(404e90f0-f0f9-41d9-ac4d-9aeb63770d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2\\\" Netns:\\\"/var/run/netns/f359f037-4b43-4544-9071-cbb72ec49614\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=92e7b53b772000260e49cf83d11a4b36d52e9a33ff2e1ba8442ed4ef667188a2;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s\\\": dial tcp 38.102.83.246:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-bmlvt" podUID="404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.731912 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:04:30 crc kubenswrapper[4872]: W0203 06:04:30.747480 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-336bc946c3f9b6d29d6eac7d839bcb427fc830b751d9b4899aa567ee5707a408 WatchSource:0}: Error finding container 336bc946c3f9b6d29d6eac7d839bcb427fc830b751d9b4899aa567ee5707a408: Status 404 returned error can't find the container with id 336bc946c3f9b6d29d6eac7d839bcb427fc830b751d9b4899aa567ee5707a408 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.833963 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.835844 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.836823 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9" exitCode=0 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.836857 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106" exitCode=0 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.836865 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c" exitCode=0 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.836874 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037" exitCode=2 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.836892 4872 scope.go:117] "RemoveContainer" containerID="4b9db35ee3d1c705f866ac417104d9e766bf2f8ef23bbcaae488d98838394077" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.838033 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"336bc946c3f9b6d29d6eac7d839bcb427fc830b751d9b4899aa567ee5707a408"} Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.840762 4872 generic.go:334] "Generic (PLEG): container finished" podID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" 
containerID="4d0aa141be246d38eb969f914647a923eed1778b68672de1f3f94c95bd463177" exitCode=0 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.840782 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"885ce7ef-0dc0-4f65-8ae0-296be20278fc","Type":"ContainerDied","Data":"4d0aa141be246d38eb969f914647a923eed1778b68672de1f3f94c95bd463177"} Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.841439 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.841861 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.842044 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.843250 4872 generic.go:334] "Generic (PLEG): container finished" podID="088135ef-2437-4cab-b009-302268e318d5" containerID="9904a3a43025acd9cca68a2f72556567ed3333033345e15955d01f97b2e47466" exitCode=0 Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.843763 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttvm7" event={"ID":"088135ef-2437-4cab-b009-302268e318d5","Type":"ContainerDied","Data":"9904a3a43025acd9cca68a2f72556567ed3333033345e15955d01f97b2e47466"} Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.843849 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.844163 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.844231 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.844306 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.844674 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:30 crc kubenswrapper[4872]: I0203 06:04:30.844992 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.223087 4872 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.223508 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: E0203 06:04:31.485168 4872 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 03 06:04:31 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc" Netns:"/var/run/netns/8872fc3c-a367-4503-9f78-7a49d2072757" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:31 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:31 crc kubenswrapper[4872]: > Feb 03 06:04:31 crc kubenswrapper[4872]: E0203 06:04:31.485300 4872 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 03 06:04:31 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc" Netns:"/var/run/netns/8872fc3c-a367-4503-9f78-7a49d2072757" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:31 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:31 crc kubenswrapper[4872]: > pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:31 crc kubenswrapper[4872]: E0203 06:04:31.485334 4872 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 03 06:04:31 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc" 
Netns:"/var/run/netns/8872fc3c-a367-4503-9f78-7a49d2072757" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:31 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:31 crc kubenswrapper[4872]: > pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:31 crc kubenswrapper[4872]: E0203 06:04:31.485408 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-bmlvt_openshift-marketplace(404e90f0-f0f9-41d9-ac4d-9aeb63770d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-bmlvt_openshift-marketplace(404e90f0-f0f9-41d9-ac4d-9aeb63770d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc\\\" Netns:\\\"/var/run/netns/8872fc3c-a367-4503-9f78-7a49d2072757\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=b3997c50c1698e51bb4b72b41fa313e16a35197d35e51fa7cc7778cbcb3bedfc;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s\\\": dial tcp 38.102.83.246:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-bmlvt" podUID="404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.853630 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttvm7" event={"ID":"088135ef-2437-4cab-b009-302268e318d5","Type":"ContainerStarted","Data":"05fdec7ca56cfe922a229bcc7775e31c7ee82937f0d5befa67d79b28a2120712"} Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.854274 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.854723 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.855130 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.855454 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.856581 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b"} Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.856906 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.857110 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.857309 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.857563 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:31 crc kubenswrapper[4872]: I0203 06:04:31.859403 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.100971 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.101645 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.102030 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.102203 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.102346 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.208779 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-var-lock\") pod \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.208873 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kubelet-dir\") pod \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.208899 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kube-api-access\") pod \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\" (UID: \"885ce7ef-0dc0-4f65-8ae0-296be20278fc\") " Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.209632 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-var-lock" (OuterVolumeSpecName: "var-lock") pod "885ce7ef-0dc0-4f65-8ae0-296be20278fc" (UID: "885ce7ef-0dc0-4f65-8ae0-296be20278fc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.209705 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "885ce7ef-0dc0-4f65-8ae0-296be20278fc" (UID: "885ce7ef-0dc0-4f65-8ae0-296be20278fc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.218924 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "885ce7ef-0dc0-4f65-8ae0-296be20278fc" (UID: "885ce7ef-0dc0-4f65-8ae0-296be20278fc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.310750 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.310870 4872 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.310881 4872 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/885ce7ef-0dc0-4f65-8ae0-296be20278fc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.868275 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.869623 4872 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302" exitCode=0 Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.871896 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.873786 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"885ce7ef-0dc0-4f65-8ae0-296be20278fc","Type":"ContainerDied","Data":"d8d6c0b84d44f3af5f4ef4ad68dc18453951b0b36150d9f85588aa290f33844a"} Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.873825 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d6c0b84d44f3af5f4ef4ad68dc18453951b0b36150d9f85588aa290f33844a" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.883943 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.884378 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:32 crc kubenswrapper[4872]: I0203 06:04:32.884838 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.241235 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.242764 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.243843 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.244585 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.245474 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.246221 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326366 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326509 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326577 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326648 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326899 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326954 4872 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.326981 4872 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.427597 4872 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.878913 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.879604 4872 scope.go:117] "RemoveContainer" containerID="8a437b2e0c86ed9afe6e178e0925e0523a7b417a502a1a74d4523f6b016566c9" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.879743 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.899088 4872 scope.go:117] "RemoveContainer" containerID="9a54a21f9e2800047e4492f12bb3d83bc1889d1966fe0c560e6dc649c4d80106" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.910256 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.911071 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.911438 4872 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.911734 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": 
dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.914155 4872 scope.go:117] "RemoveContainer" containerID="77d544702e455d59cd7003848cb3867cd7563a69f1db7c6f7f614b850541e32c" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.930610 4872 scope.go:117] "RemoveContainer" containerID="f75f31c187c6e81f0f79c238a6c5aa9512ce23e0382ffa2de26d33ab9d097037" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.960022 4872 scope.go:117] "RemoveContainer" containerID="c60e5671d9d3b7f96a91dc94db6db2ce93d9a274caf1da237cce745d192a7302" Feb 03 06:04:33 crc kubenswrapper[4872]: I0203 06:04:33.984357 4872 scope.go:117] "RemoveContainer" containerID="14d4c7759cef3b0bc2ddd82df9213c4929b319dbd9b4d6b77bb181b7196585cf" Feb 03 06:04:34 crc kubenswrapper[4872]: I0203 06:04:34.129887 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 03 06:04:35 crc kubenswrapper[4872]: E0203 06:04:35.423471 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-ttvm7.1890a75cc3cddb39 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-ttvm7,UID:088135ef-2437-4cab-b009-302268e318d5,APIVersion:v1,ResourceVersion:29567,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 590ms (590ms including waiting). 
Image size: 1203410157 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 06:04:30.420482873 +0000 UTC m=+241.003174307,LastTimestamp:2026-02-03 06:04:30.420482873 +0000 UTC m=+241.003174307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.692652 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.692745 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.755622 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.757002 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.757597 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.758243 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.970417 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ttvm7" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.971127 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.971516 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:38 crc kubenswrapper[4872]: I0203 06:04:38.972161 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:39 crc 
kubenswrapper[4872]: E0203 06:04:39.855910 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:39 crc kubenswrapper[4872]: E0203 06:04:39.856595 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:39 crc kubenswrapper[4872]: E0203 06:04:39.857121 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:39 crc kubenswrapper[4872]: E0203 06:04:39.857465 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:39 crc kubenswrapper[4872]: E0203 06:04:39.857875 4872 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:39 crc kubenswrapper[4872]: I0203 06:04:39.857910 4872 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 03 06:04:39 crc kubenswrapper[4872]: E0203 06:04:39.858225 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms" Feb 03 06:04:40 crc kubenswrapper[4872]: E0203 06:04:40.059310 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms" Feb 03 06:04:40 crc kubenswrapper[4872]: I0203 06:04:40.132425 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:40 crc kubenswrapper[4872]: I0203 06:04:40.132973 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:40 crc kubenswrapper[4872]: I0203 06:04:40.133423 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:40 crc kubenswrapper[4872]: 
E0203 06:04:40.460402 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms" Feb 03 06:04:41 crc kubenswrapper[4872]: E0203 06:04:41.261525 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s" Feb 03 06:04:42 crc kubenswrapper[4872]: E0203 06:04:42.863562 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.122649 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.123409 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:43 crc kubenswrapper[4872]: E0203 06:04:43.603889 4872 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 03 06:04:43 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a" Netns:"/var/run/netns/dfefcf58-6f47-4562-b663-b3b64dc65209" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:43 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:43 crc kubenswrapper[4872]: > Feb 03 06:04:43 crc kubenswrapper[4872]: E0203 06:04:43.604296 4872 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err=< Feb 03 06:04:43 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a" Netns:"/var/run/netns/dfefcf58-6f47-4562-b663-b3b64dc65209" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:43 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:43 crc kubenswrapper[4872]: > pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:43 crc kubenswrapper[4872]: E0203 06:04:43.604343 4872 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 03 06:04:43 crc kubenswrapper[4872]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a" Netns:"/var/run/netns/dfefcf58-6f47-4562-b663-b3b64dc65209" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s": dial tcp 38.102.83.246:6443: connect: connection refused Feb 03 06:04:43 crc kubenswrapper[4872]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 03 06:04:43 crc kubenswrapper[4872]: > pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:43 crc kubenswrapper[4872]: E0203 06:04:43.604425 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-bmlvt_openshift-marketplace(404e90f0-f0f9-41d9-ac4d-9aeb63770d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-bmlvt_openshift-marketplace(404e90f0-f0f9-41d9-ac4d-9aeb63770d50)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-bmlvt_openshift-marketplace_404e90f0-f0f9-41d9-ac4d-9aeb63770d50_0(a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a): error adding pod openshift-marketplace_redhat-marketplace-bmlvt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a\\\" Netns:\\\"/var/run/netns/dfefcf58-6f47-4562-b663-b3b64dc65209\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-bmlvt;K8S_POD_INFRA_CONTAINER_ID=a9f1d3354857b234772e554e756a13628940bbcd804324627e0d981286d0e52a;K8S_POD_UID=404e90f0-f0f9-41d9-ac4d-9aeb63770d50\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-bmlvt] networking: Multus: [openshift-marketplace/redhat-marketplace-bmlvt/404e90f0-f0f9-41d9-ac4d-9aeb63770d50]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-bmlvt in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-bmlvt?timeout=1m0s\\\": dial tcp 38.102.83.246:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-bmlvt" podUID="404e90f0-f0f9-41d9-ac4d-9aeb63770d50" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.942009 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.942123 4872 
generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0" exitCode=1 Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.942199 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0"} Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.943225 4872 scope.go:117] "RemoveContainer" containerID="bc8464da3f2819a4d0b0120ab113f0c91d0bac7880d14d0310a6db6e6b992ff0" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.943522 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.944155 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.944763 4872 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:43 crc kubenswrapper[4872]: I0203 06:04:43.945252 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:44 crc kubenswrapper[4872]: E0203 06:04:44.592052 4872 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-ttvm7.1890a75cc3cddb39 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-ttvm7,UID:088135ef-2437-4cab-b009-302268e318d5,APIVersion:v1,ResourceVersion:29567,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 590ms (590ms including waiting). 
Image size: 1203410157 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 06:04:30.420482873 +0000 UTC m=+241.003174307,LastTimestamp:2026-02-03 06:04:30.420482873 +0000 UTC m=+241.003174307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 06:04:44 crc kubenswrapper[4872]: I0203 06:04:44.954000 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 06:04:44 crc kubenswrapper[4872]: I0203 06:04:44.954093 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0550b5efd89d70690a7f3478ead31adf72a01f4cca57ac38150780686539aae0"} Feb 03 06:04:44 crc kubenswrapper[4872]: I0203 06:04:44.955293 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:44 crc kubenswrapper[4872]: I0203 06:04:44.955947 4872 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:44 crc kubenswrapper[4872]: I0203 06:04:44.956504 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:44 crc kubenswrapper[4872]: I0203 06:04:44.956983 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.122749 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.124896 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.125394 4872 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.126384 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.126921 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.148427 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.148908 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:45 crc kubenswrapper[4872]: E0203 06:04:45.149759 4872 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.150511 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.963971 4872 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1a91452ac7638d0f77298a1960397e5ab7ff35cec38a41f13c234fd3358b99f8" exitCode=0 Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.964033 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1a91452ac7638d0f77298a1960397e5ab7ff35cec38a41f13c234fd3358b99f8"} Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.964075 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ae4a80b1d0229df002fce1709d6b5fd062263453afc5a9e2a4fd1e1321a3da4b"} Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.964457 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.964503 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:45 crc kubenswrapper[4872]: E0203 06:04:45.965096 4872 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.965259 4872 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.966325 4872 status_manager.go:851] "Failed to get status for pod" podUID="088135ef-2437-4cab-b009-302268e318d5" pod="openshift-marketplace/certified-operators-ttvm7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ttvm7\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.967016 4872 status_manager.go:851] "Failed to get status for pod" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:45 crc kubenswrapper[4872]: I0203 06:04:45.967543 4872 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Feb 03 06:04:46 crc kubenswrapper[4872]: E0203 06:04:46.066066 4872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="6.4s" Feb 03 06:04:46 crc kubenswrapper[4872]: I0203 06:04:46.973535 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fea249630736e0555a129918cd988c8ccf87762f639c1f52591cd97cafe48638"} Feb 03 06:04:46 crc kubenswrapper[4872]: I0203 06:04:46.973851 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e05e1a94392a4926f4763c17e3c8fd3dee71e2b5d8208a2fe76d95628b0c338"} Feb 03 06:04:46 crc kubenswrapper[4872]: I0203 06:04:46.973861 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5697308d227f23e2f5f2281f185d568dc22c62cf660737f6b333580df1cb6e1b"} Feb 03 06:04:47 crc kubenswrapper[4872]: I0203 06:04:47.982103 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"74860becb3e501c401ca1ad67dd7e1c7dbb2bcfa83e3022fc86785fb423bfa0d"} Feb 03 06:04:47 crc kubenswrapper[4872]: I0203 06:04:47.982323 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:47 crc kubenswrapper[4872]: I0203 06:04:47.982334 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"75dcb24d07f9e18ec5d034a23ba3a91bcaba1ca81b2b7b6033a9678379d1fc7a"} Feb 03 06:04:47 crc kubenswrapper[4872]: I0203 06:04:47.982433 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:47 crc kubenswrapper[4872]: I0203 06:04:47.982473 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:49 crc kubenswrapper[4872]: I0203 06:04:49.240196 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:04:49 crc kubenswrapper[4872]: I0203 06:04:49.251825 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:04:49 crc kubenswrapper[4872]: I0203 06:04:49.426971 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:04:50 crc kubenswrapper[4872]: I0203 06:04:50.150748 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:50 crc kubenswrapper[4872]: I0203 06:04:50.150803 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:50 crc kubenswrapper[4872]: I0203 06:04:50.160201 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:52 
Feb 03 06:04:52 crc kubenswrapper[4872]: I0203 06:04:52.991942 4872 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:53 crc kubenswrapper[4872]: I0203 06:04:53.127700 4872 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="25acbe40-fd4e-48e8-a35d-553a05c1d7af" Feb 03 06:04:54 crc kubenswrapper[4872]: I0203 06:04:54.013362 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:54 crc kubenswrapper[4872]: I0203 06:04:54.013405 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:54 crc kubenswrapper[4872]: I0203 06:04:54.017784 4872 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="25acbe40-fd4e-48e8-a35d-553a05c1d7af" Feb 03 06:04:54 crc kubenswrapper[4872]: I0203 06:04:54.018035 4872 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://5697308d227f23e2f5f2281f185d568dc22c62cf660737f6b333580df1cb6e1b" Feb 03 06:04:54 crc kubenswrapper[4872]: I0203 06:04:54.018061 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 06:04:55 crc kubenswrapper[4872]: I0203 06:04:55.021127 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:55 crc kubenswrapper[4872]: I0203 06:04:55.021473 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e" Feb 03 06:04:55 crc kubenswrapper[4872]: I0203 06:04:55.025554 4872 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="25acbe40-fd4e-48e8-a35d-553a05c1d7af" Feb 03 06:04:58 crc kubenswrapper[4872]: I0203 06:04:58.122472 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:58 crc kubenswrapper[4872]: I0203 06:04:58.123455 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:04:59 crc kubenswrapper[4872]: I0203 06:04:59.043509 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmlvt" event={"ID":"404e90f0-f0f9-41d9-ac4d-9aeb63770d50","Type":"ContainerStarted","Data":"343245bb92899fa700fa7cfe4fa07327d39bd9a8c41498d6506fcd22c00d6986"} Feb 03 06:04:59 crc kubenswrapper[4872]: I0203 06:04:59.429660 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 06:05:00 crc kubenswrapper[4872]: I0203 06:05:00.047795 4872 generic.go:334] "Generic (PLEG): container finished" podID="404e90f0-f0f9-41d9-ac4d-9aeb63770d50" containerID="e20622a64609c0c05b5b59c3833c15990fd30f5c036c91e44eafa70e571fd893" exitCode=0 Feb 03 06:05:00 crc kubenswrapper[4872]: I0203 06:05:00.047964 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmlvt" event={"ID":"404e90f0-f0f9-41d9-ac4d-9aeb63770d50","Type":"ContainerDied","Data":"e20622a64609c0c05b5b59c3833c15990fd30f5c036c91e44eafa70e571fd893"} Feb 03 06:05:01 crc kubenswrapper[4872]: I0203 06:05:01.058045 4872 generic.go:334] "Generic (PLEG): container finished" podID="404e90f0-f0f9-41d9-ac4d-9aeb63770d50" containerID="11960a20a0911e15c140979fb5e328955c00fb26ce3efbfd16d5fe2051da05cf" exitCode=0 Feb 03 06:05:01 crc kubenswrapper[4872]: I0203 06:05:01.058535 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmlvt" event={"ID":"404e90f0-f0f9-41d9-ac4d-9aeb63770d50","Type":"ContainerDied","Data":"11960a20a0911e15c140979fb5e328955c00fb26ce3efbfd16d5fe2051da05cf"} Feb 03 06:05:01 crc kubenswrapper[4872]: I0203 06:05:01.996380 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 06:05:02 crc kubenswrapper[4872]: I0203 06:05:02.068431 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmlvt" event={"ID":"404e90f0-f0f9-41d9-ac4d-9aeb63770d50","Type":"ContainerStarted","Data":"d7a76cd19f2fff56aa12d39e6c9cc414dcb126c91e2ba6a13b836cf4b0c059c3"} Feb 03 06:05:02 crc kubenswrapper[4872]: I0203 06:05:02.701007 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 06:05:02 crc kubenswrapper[4872]: I0203 06:05:02.734025 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 06:05:02 crc kubenswrapper[4872]: I0203 06:05:02.860614 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 06:05:03 crc kubenswrapper[4872]: I0203 06:05:03.247677 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 06:05:03 crc kubenswrapper[4872]: I0203 06:05:03.364522 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 06:05:03 crc kubenswrapper[4872]: I0203 06:05:03.521430 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 06:05:03 crc kubenswrapper[4872]: I0203 06:05:03.565313 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 06:05:03 
Feb 03 06:05:03 crc kubenswrapper[4872]: I0203 06:05:03.631749 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 06:05:03 crc kubenswrapper[4872]: I0203 06:05:03.698504 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 06:05:03 crc kubenswrapper[4872]: I0203 06:05:03.783112 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.029830 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.199354 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.258360 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.347540 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.504929 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.648018 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.683905 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.711596 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.770550 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.783015 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.837184 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.874190 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.957319 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 06:05:04 crc kubenswrapper[4872]: I0203 06:05:04.987203 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.024981 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.145187 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 
06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.358061 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.417322 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.532005 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.648405 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.648654 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.682605 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.893526 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 06:05:05 crc kubenswrapper[4872]: I0203 06:05:05.935050 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.192351 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.222120 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.296957 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.298747 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.379352 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.535403 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.603399 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.672351 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.695526 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.775784 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.847314 4872 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.902105 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.910132 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.945822 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.949528 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.954940 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.966243 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 03 06:05:06 crc kubenswrapper[4872]: I0203 06:05:06.972495 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.043384 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.133946 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.209435 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.361184 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.367242 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.367378 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.403387 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.429667 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.440865 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.461219 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.484017 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.632447 4872 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.663539 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.694100 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.699637 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.719608 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 03 06:05:07 crc kubenswrapper[4872]: I0203 06:05:07.753191 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.028652 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.063593 4872 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.088721 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.097591 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.125299 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.140454 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.153279 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.213630 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.273974 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.289986 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.300286 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.349150 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.408953 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
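The reflector.go:368 burst above is the kubelet warming a local watch cache for every Secret and ConfigMap referenced by the pods assigned to this node, one reflector per object. When auditing a startup like this one, tallying these entries by resource type and namespace quickly shows which namespaces dominate the sync. A minimal offline tally in Go — illustrative tooling, not kubelet code; it assumes the journal text arrives on stdin and that a physical line may carry several entries:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g.:
//   reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
var cachePopulated = regexp.MustCompile(`Caches populated for \*(v1\.\w+) from object-"([^"]+)"/"([^"]+)"`)

func main() {
	counts := map[string]int{} // key: "<type> <namespace>"
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		// A single journal line may contain multiple run-together entries,
		// so collect every match on the line, not just the first.
		for _, m := range cachePopulated.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%6d %s\n", n, k)
	}
}
```

Fed with something like `journalctl -u kubelet | go run tally.go | sort -rn`, the output ranks namespaces by how many caches the kubelet had to populate for them.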
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.436698 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.534611 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.581547 4872 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.582042 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.582023278 podStartE2EDuration="38.582023278s" podCreationTimestamp="2026-02-03 06:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:04:53.04853605 +0000 UTC m=+263.631227464" watchObservedRunningTime="2026-02-03 06:05:08.582023278 +0000 UTC m=+279.164714692"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.583967 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bmlvt" podStartSLOduration=38.184088117 podStartE2EDuration="39.58396052s" podCreationTimestamp="2026-02-03 06:04:29 +0000 UTC" firstStartedPulling="2026-02-03 06:05:00.049968265 +0000 UTC m=+270.632659679" lastFinishedPulling="2026-02-03 06:05:01.449840658 +0000 UTC m=+272.032532082" observedRunningTime="2026-02-03 06:05:02.090642402 +0000 UTC m=+272.673333816" watchObservedRunningTime="2026-02-03 06:05:08.58396052 +0000 UTC m=+279.166651934"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.585101 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ttvm7" podStartSLOduration=38.978347774 podStartE2EDuration="40.585092169s" podCreationTimestamp="2026-02-03 06:04:28 +0000 UTC" firstStartedPulling="2026-02-03 06:04:29.830018645 +0000 UTC m=+240.412710059" lastFinishedPulling="2026-02-03 06:04:31.43676304 +0000 UTC m=+242.019454454" observedRunningTime="2026-02-03 06:04:53.062215921 +0000 UTC m=+263.644907345" watchObservedRunningTime="2026-02-03 06:05:08.585092169 +0000 UTC m=+279.167783593"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.585814 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.585861 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.585889 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmlvt"]
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.586254 4872 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.586273 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ba6759dc-5609-4243-988a-125ffde7ec9e"
Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.589621 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
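The pod_startup_latency_tracker.go:104 entries record two windows per pod: podStartE2EDuration, from podCreationTimestamp to the first observed running state, and podStartSLOduration, the same window minus time spent pulling images. The redhat-marketplace-bmlvt entry checks out: the pull ran from 06:05:01.449840658 back to 06:05:00.049968265, about 1.400 s, which is exactly the gap between the E2E figure (39.584 s) and the SLO figure (38.184 s). For static pods such as kube-apiserver-startup-monitor-crc the pull timestamps are the zero time (0001-01-01), so the two durations coincide. The m=+NNN suffixes are monotonic seconds since this kubelet process started. The mirror_client.go:130 "Deleting a mirror pod" entry appears to be the kubelet retiring the stale API-side copy of the static kube-apiserver-crc pod; the adjacent REMOVE/ADD pair for the same pod from source="api" is that mirror object being replaced in the API server.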
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.60220066 podStartE2EDuration="16.60220066s" podCreationTimestamp="2026-02-03 06:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:05:08.600651179 +0000 UTC m=+279.183342593" watchObservedRunningTime="2026-02-03 06:05:08.60220066 +0000 UTC m=+279.184892074" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.606134 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.645103 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.658625 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.723080 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.728024 4872 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.743831 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.826717 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.861679 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 06:05:08 crc kubenswrapper[4872]: I0203 06:05:08.899902 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.055200 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.071060 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.072636 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.120649 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.246890 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.269392 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.335910 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.365404 4872 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.403736 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.442257 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.480161 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.493809 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.730104 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.857548 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.885255 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.900203 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.919760 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.965340 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.982829 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 06:05:09 crc kubenswrapper[4872]: I0203 06:05:09.994311 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.158032 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.158081 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.190883 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.217270 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.230859 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.259869 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 
06:05:10.273614 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.425089 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.468581 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.544331 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.634751 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.756117 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.865236 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.903626 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.940233 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.953674 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 06:05:10 crc kubenswrapper[4872]: I0203 06:05:10.970918 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.069522 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.100342 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.108426 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.170908 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bmlvt" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.181144 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.197671 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.251041 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.257269 4872 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.272638 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.371394 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.401324 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.499886 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.503120 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.518750 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.592724 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.594102 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.652608 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.661308 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 06:05:11 crc kubenswrapper[4872]: I0203 06:05:11.942734 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.023474 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.156734 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.327766 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.372980 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.379759 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.398155 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.412747 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.546345 4872 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.553506 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.630218 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.638664 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.785849 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.824246 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.898381 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.917608 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 06:05:12 crc kubenswrapper[4872]: I0203 06:05:12.971089 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.024007 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.160498 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.199549 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.216588 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.264327 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.296413 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.300976 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.302761 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.341754 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.422874 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.436883 4872 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.442766 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.463263 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.632833 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.756759 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.770926 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.816190 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.821539 4872 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.865069 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.894090 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.914053 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 06:05:13 crc kubenswrapper[4872]: I0203 06:05:13.926432 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.010131 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.011627 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.014679 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.048084 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.068999 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.154610 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.174846 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.199221 4872 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.217346 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.236978 4872 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.312461 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.358861 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.373766 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.373961 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.381898 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.422751 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.428290 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.434361 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.458682 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.512103 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.534061 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.640286 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.650353 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.730976 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.738293 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 06:05:14 crc kubenswrapper[4872]: I0203 06:05:14.876424 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" 
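All of these "Caches populated" messages come from client-go's reflector (reflector.go:368), the LIST-then-WATCH machinery that also powers shared informers; the kubelet runs a dedicated watch per referenced Secret/ConfigMap, which is why each object gets its own line. A minimal sketch of the generic shared-informer form — assuming a reachable kubeconfig; this is not how the kubelet itself wires its clients:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a standard ~/.kube/config is available.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each informer runs a reflector: LIST once, then WATCH

	// Blocks until the initial LIST has landed in the local store --
	// the moment at which the reflector would report its cache as populated.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("configmap cache populated:", len(cmInformer.GetStore().List()), "objects")
}
```

Once WaitForCacheSync returns true, reads served from the local store no longer hit the API server, which is the point of the warm-up storm recorded in this log.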
Feb 03 06:05:15 crc kubenswrapper[4872]: I0203 06:05:15.110992 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 06:05:15 crc kubenswrapper[4872]: I0203 06:05:15.129844 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 06:05:15 crc kubenswrapper[4872]: I0203 06:05:15.157932 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 06:05:15 crc kubenswrapper[4872]: I0203 06:05:15.303387 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 06:05:15 crc kubenswrapper[4872]: I0203 06:05:15.375072 4872 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 06:05:15 crc kubenswrapper[4872]: I0203 06:05:15.375280 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b" gracePeriod=5 Feb 03 06:05:15 crc kubenswrapper[4872]: I0203 06:05:15.619325 4872 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.122879 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.124224 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.124624 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.125125 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.260360 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.415242 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.463093 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.477595 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.519768 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.524525 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.901117 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.929469 4872 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 06:05:16 crc kubenswrapper[4872]: I0203 06:05:16.931733 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 06:05:17 crc kubenswrapper[4872]: I0203 06:05:17.010385 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 06:05:17 crc kubenswrapper[4872]: I0203 06:05:17.175466 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 06:05:17 crc kubenswrapper[4872]: I0203 06:05:17.276751 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 06:05:17 crc kubenswrapper[4872]: I0203 06:05:17.499895 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 06:05:17 crc kubenswrapper[4872]: I0203 06:05:17.637337 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 06:05:17 crc kubenswrapper[4872]: I0203 06:05:17.954821 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.040677 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.048382 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.173931 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.174609 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.227257 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.489529 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.506184 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.574451 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.737781 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 06:05:18 crc kubenswrapper[4872]: I0203 06:05:18.769534 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 06:05:19 crc kubenswrapper[4872]: I0203 06:05:19.020817 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 06:05:19 crc kubenswrapper[4872]: I0203 06:05:19.100307 4872 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 06:05:20 crc kubenswrapper[4872]: I0203 06:05:20.974919 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 06:05:20 crc kubenswrapper[4872]: I0203 06:05:20.975280 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.071221 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.071305 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.071356 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.071409 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.071436 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.072305 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.072361 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.073273 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.073314 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.091015 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.174289 4872 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.174333 4872 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.174350 4872 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.174366 4872 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.174378 4872 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.212547 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.212593 4872 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b" exitCode=137 Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.212636 4872 scope.go:117] "RemoveContainer" containerID="d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.212711 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.231996 4872 scope.go:117] "RemoveContainer" containerID="d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b" Feb 03 06:05:21 crc kubenswrapper[4872]: E0203 06:05:21.232358 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b\": container with ID starting with d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b not found: ID does not exist" containerID="d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b" Feb 03 06:05:21 crc kubenswrapper[4872]: I0203 06:05:21.232394 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b"} err="failed to get container status \"d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b\": rpc error: code = NotFound desc = could not find container \"d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b\": container with ID starting with d7f624cd8ca571cc9ea28c6bf08119cc0dffba32e8ab3d7e68b69f72bdcf341b not found: ID does not exist" Feb 03 06:05:22 crc kubenswrapper[4872]: I0203 06:05:22.133522 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 03 06:05:22 crc kubenswrapper[4872]: I0203 06:05:22.134234 4872 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 03 06:05:22 crc kubenswrapper[4872]: I0203 06:05:22.147181 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 06:05:22 crc kubenswrapper[4872]: I0203 06:05:22.147254 4872 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c0874f6c-99bb-4430-b331-01ad7f415e9f" Feb 03 06:05:22 crc kubenswrapper[4872]: I0203 06:05:22.153532 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 06:05:22 crc kubenswrapper[4872]: I0203 06:05:22.153591 4872 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c0874f6c-99bb-4430-b331-01ad7f415e9f" Feb 03 06:05:29 crc kubenswrapper[4872]: I0203 06:05:29.890561 4872 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 03 06:05:39 crc kubenswrapper[4872]: I0203 06:05:39.429107 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 06:05:39 crc kubenswrapper[4872]: I0203 06:05:39.765001 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bk69"] Feb 03 06:05:39 crc kubenswrapper[4872]: I0203 06:05:39.765245 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" podUID="7f8ddd98-412e-4a11-9cc2-07595b9cfdba" containerName="controller-manager" 
containerID="cri-o://571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8" gracePeriod=30 Feb 03 06:05:39 crc kubenswrapper[4872]: I0203 06:05:39.769120 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th"] Feb 03 06:05:39 crc kubenswrapper[4872]: I0203 06:05:39.769374 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" podUID="aeef1cbf-eef5-48f8-b111-6b7244d686d4" containerName="route-controller-manager" containerID="cri-o://1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89" gracePeriod=30 Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.178404 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.179518 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210092 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef1cbf-eef5-48f8-b111-6b7244d686d4-serving-cert\") pod \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210138 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-config\") pod \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210159 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-client-ca\") pod \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210174 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-config\") pod \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210203 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-proxy-ca-bundles\") pod \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210222 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smrbk\" (UniqueName: \"kubernetes.io/projected/aeef1cbf-eef5-48f8-b111-6b7244d686d4-kube-api-access-smrbk\") pod \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210248 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-serving-cert\") pod \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\" (UID: 
\"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210283 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mtdk\" (UniqueName: \"kubernetes.io/projected/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-kube-api-access-4mtdk\") pod \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\" (UID: \"7f8ddd98-412e-4a11-9cc2-07595b9cfdba\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.210331 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-client-ca\") pod \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\" (UID: \"aeef1cbf-eef5-48f8-b111-6b7244d686d4\") " Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.212082 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-config" (OuterVolumeSpecName: "config") pod "aeef1cbf-eef5-48f8-b111-6b7244d686d4" (UID: "aeef1cbf-eef5-48f8-b111-6b7244d686d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.212158 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "aeef1cbf-eef5-48f8-b111-6b7244d686d4" (UID: "aeef1cbf-eef5-48f8-b111-6b7244d686d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.213440 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7f8ddd98-412e-4a11-9cc2-07595b9cfdba" (UID: "7f8ddd98-412e-4a11-9cc2-07595b9cfdba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.213850 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f8ddd98-412e-4a11-9cc2-07595b9cfdba" (UID: "7f8ddd98-412e-4a11-9cc2-07595b9cfdba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.218353 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-config" (OuterVolumeSpecName: "config") pod "7f8ddd98-412e-4a11-9cc2-07595b9cfdba" (UID: "7f8ddd98-412e-4a11-9cc2-07595b9cfdba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.219027 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f8ddd98-412e-4a11-9cc2-07595b9cfdba" (UID: "7f8ddd98-412e-4a11-9cc2-07595b9cfdba"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.223528 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeef1cbf-eef5-48f8-b111-6b7244d686d4-kube-api-access-smrbk" (OuterVolumeSpecName: "kube-api-access-smrbk") pod "aeef1cbf-eef5-48f8-b111-6b7244d686d4" (UID: "aeef1cbf-eef5-48f8-b111-6b7244d686d4"). InnerVolumeSpecName "kube-api-access-smrbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.226197 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeef1cbf-eef5-48f8-b111-6b7244d686d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aeef1cbf-eef5-48f8-b111-6b7244d686d4" (UID: "aeef1cbf-eef5-48f8-b111-6b7244d686d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.229184 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-kube-api-access-4mtdk" (OuterVolumeSpecName: "kube-api-access-4mtdk") pod "7f8ddd98-412e-4a11-9cc2-07595b9cfdba" (UID: "7f8ddd98-412e-4a11-9cc2-07595b9cfdba"). InnerVolumeSpecName "kube-api-access-4mtdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311269 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef1cbf-eef5-48f8-b111-6b7244d686d4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311294 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311302 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311310 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311337 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311349 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smrbk\" (UniqueName: \"kubernetes.io/projected/aeef1cbf-eef5-48f8-b111-6b7244d686d4-kube-api-access-smrbk\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311358 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.311367 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mtdk\" (UniqueName: \"kubernetes.io/projected/7f8ddd98-412e-4a11-9cc2-07595b9cfdba-kube-api-access-4mtdk\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc 
kubenswrapper[4872]: I0203 06:05:40.311374 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef1cbf-eef5-48f8-b111-6b7244d686d4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.382839 4872 generic.go:334] "Generic (PLEG): container finished" podID="aeef1cbf-eef5-48f8-b111-6b7244d686d4" containerID="1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89" exitCode=0 Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.382882 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.382914 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" event={"ID":"aeef1cbf-eef5-48f8-b111-6b7244d686d4","Type":"ContainerDied","Data":"1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89"} Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.382957 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th" event={"ID":"aeef1cbf-eef5-48f8-b111-6b7244d686d4","Type":"ContainerDied","Data":"d48315956ea3c2acf57f795434e4e453f1fcff7cd714c8fbfbe61f309acde265"} Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.382973 4872 scope.go:117] "RemoveContainer" containerID="1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.390801 4872 generic.go:334] "Generic (PLEG): container finished" podID="7f8ddd98-412e-4a11-9cc2-07595b9cfdba" containerID="571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8" exitCode=0 Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.390915 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.390945 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" event={"ID":"7f8ddd98-412e-4a11-9cc2-07595b9cfdba","Type":"ContainerDied","Data":"571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8"} Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.391312 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bk69" event={"ID":"7f8ddd98-412e-4a11-9cc2-07595b9cfdba","Type":"ContainerDied","Data":"ccb4911ef3d18f52922dcd278f1f3cadaa60ee38a9bd9cfa2bb068b9c22c21ea"} Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.406220 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th"] Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.409706 4872 scope.go:117] "RemoveContainer" containerID="1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89" Feb 03 06:05:40 crc kubenswrapper[4872]: E0203 06:05:40.410452 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89\": container with ID starting with 1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89 not found: ID does not exist" containerID="1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.410491 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89"} err="failed to get container status \"1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89\": rpc error: code = NotFound desc = could not find container \"1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89\": container with ID starting with 1f7cda355bc879fa9ebdbb72514939eaa87328d30a9d30378e110f2b481c9a89 not found: ID does not exist" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.410510 4872 scope.go:117] "RemoveContainer" containerID="571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.412330 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jq7th"] Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.441163 4872 scope.go:117] "RemoveContainer" containerID="571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8" Feb 03 06:05:40 crc kubenswrapper[4872]: E0203 06:05:40.441755 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8\": container with ID starting with 571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8 not found: ID does not exist" containerID="571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.441829 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8"} err="failed to get container status 
\"571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8\": rpc error: code = NotFound desc = could not find container \"571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8\": container with ID starting with 571ef2dcc0ebc84c4b73edec4465a5091bb3a7f06a71c5585dccf247bde9d5e8 not found: ID does not exist" Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.447766 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bk69"] Feb 03 06:05:40 crc kubenswrapper[4872]: I0203 06:05:40.451500 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bk69"] Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.714442 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-cqjlw"] Feb 03 06:05:41 crc kubenswrapper[4872]: E0203 06:05:41.715248 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeef1cbf-eef5-48f8-b111-6b7244d686d4" containerName="route-controller-manager" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715272 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeef1cbf-eef5-48f8-b111-6b7244d686d4" containerName="route-controller-manager" Feb 03 06:05:41 crc kubenswrapper[4872]: E0203 06:05:41.715293 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" containerName="installer" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715305 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" containerName="installer" Feb 03 06:05:41 crc kubenswrapper[4872]: E0203 06:05:41.715322 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8ddd98-412e-4a11-9cc2-07595b9cfdba" containerName="controller-manager" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715337 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8ddd98-412e-4a11-9cc2-07595b9cfdba" containerName="controller-manager" Feb 03 06:05:41 crc kubenswrapper[4872]: E0203 06:05:41.715353 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715365 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715516 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715538 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeef1cbf-eef5-48f8-b111-6b7244d686d4" containerName="route-controller-manager" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715563 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="885ce7ef-0dc0-4f65-8ae0-296be20278fc" containerName="installer" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.715580 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8ddd98-412e-4a11-9cc2-07595b9cfdba" containerName="controller-manager" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.716246 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.717929 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx"] Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.718634 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.721384 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.721863 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.722160 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.722696 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.722897 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.723004 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.723105 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.725226 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.726928 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.727923 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.727964 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.728140 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.735419 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-cqjlw"] Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.741449 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.759284 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx"] Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833241 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-proxy-ca-bundles\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833297 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-config\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833347 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a30e702-af13-444a-87ee-f6d9bc19bd7a-serving-cert\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833418 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-client-ca\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833456 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-client-ca\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833504 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5be461-2648-4dbe-9e11-fe1375d7ee26-serving-cert\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833544 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gtg\" (UniqueName: \"kubernetes.io/projected/6a30e702-af13-444a-87ee-f6d9bc19bd7a-kube-api-access-n4gtg\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833606 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-config\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.833716 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjst\" (UniqueName: 
\"kubernetes.io/projected/4c5be461-2648-4dbe-9e11-fe1375d7ee26-kube-api-access-wrjst\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.908127 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-cqjlw"] Feb 03 06:05:41 crc kubenswrapper[4872]: E0203 06:05:41.908594 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-wrjst proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" podUID="4c5be461-2648-4dbe-9e11-fe1375d7ee26" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.928170 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx"] Feb 03 06:05:41 crc kubenswrapper[4872]: E0203 06:05:41.928650 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-n4gtg serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" podUID="6a30e702-af13-444a-87ee-f6d9bc19bd7a" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.934822 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-proxy-ca-bundles\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.934869 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-config\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.934886 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a30e702-af13-444a-87ee-f6d9bc19bd7a-serving-cert\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.934927 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-client-ca\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.934954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-client-ca\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " 
pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.934972 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5be461-2648-4dbe-9e11-fe1375d7ee26-serving-cert\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.934994 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gtg\" (UniqueName: \"kubernetes.io/projected/6a30e702-af13-444a-87ee-f6d9bc19bd7a-kube-api-access-n4gtg\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.935012 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-config\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.935035 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjst\" (UniqueName: \"kubernetes.io/projected/4c5be461-2648-4dbe-9e11-fe1375d7ee26-kube-api-access-wrjst\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.936264 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-client-ca\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.936405 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-proxy-ca-bundles\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.936865 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-config\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.936966 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-config\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.937279 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-client-ca\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.941530 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a30e702-af13-444a-87ee-f6d9bc19bd7a-serving-cert\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.951693 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5be461-2648-4dbe-9e11-fe1375d7ee26-serving-cert\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.978423 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gtg\" (UniqueName: \"kubernetes.io/projected/6a30e702-af13-444a-87ee-f6d9bc19bd7a-kube-api-access-n4gtg\") pod \"route-controller-manager-96b64b5cc-jmntx\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:41 crc kubenswrapper[4872]: I0203 06:05:41.983350 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjst\" (UniqueName: \"kubernetes.io/projected/4c5be461-2648-4dbe-9e11-fe1375d7ee26-kube-api-access-wrjst\") pod \"controller-manager-796b84794c-cqjlw\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.128330 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8ddd98-412e-4a11-9cc2-07595b9cfdba" path="/var/lib/kubelet/pods/7f8ddd98-412e-4a11-9cc2-07595b9cfdba/volumes" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.128833 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeef1cbf-eef5-48f8-b111-6b7244d686d4" path="/var/lib/kubelet/pods/aeef1cbf-eef5-48f8-b111-6b7244d686d4/volumes" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.405948 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.405913 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.416657 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.424238 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541681 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-config\") pod \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541743 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-client-ca\") pod \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541776 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4gtg\" (UniqueName: \"kubernetes.io/projected/6a30e702-af13-444a-87ee-f6d9bc19bd7a-kube-api-access-n4gtg\") pod \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541793 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-config\") pod \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541830 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5be461-2648-4dbe-9e11-fe1375d7ee26-serving-cert\") pod \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541849 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-proxy-ca-bundles\") pod \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541894 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjst\" (UniqueName: \"kubernetes.io/projected/4c5be461-2648-4dbe-9e11-fe1375d7ee26-kube-api-access-wrjst\") pod \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541924 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a30e702-af13-444a-87ee-f6d9bc19bd7a-serving-cert\") pod \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\" (UID: \"6a30e702-af13-444a-87ee-f6d9bc19bd7a\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.541960 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-client-ca\") pod \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\" (UID: \"4c5be461-2648-4dbe-9e11-fe1375d7ee26\") " Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.542475 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c5be461-2648-4dbe-9e11-fe1375d7ee26" 
(UID: "4c5be461-2648-4dbe-9e11-fe1375d7ee26"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.542824 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-config" (OuterVolumeSpecName: "config") pod "6a30e702-af13-444a-87ee-f6d9bc19bd7a" (UID: "6a30e702-af13-444a-87ee-f6d9bc19bd7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.542862 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-config" (OuterVolumeSpecName: "config") pod "4c5be461-2648-4dbe-9e11-fe1375d7ee26" (UID: "4c5be461-2648-4dbe-9e11-fe1375d7ee26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.543522 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a30e702-af13-444a-87ee-f6d9bc19bd7a" (UID: "6a30e702-af13-444a-87ee-f6d9bc19bd7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.544739 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c5be461-2648-4dbe-9e11-fe1375d7ee26" (UID: "4c5be461-2648-4dbe-9e11-fe1375d7ee26"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.545411 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5be461-2648-4dbe-9e11-fe1375d7ee26-kube-api-access-wrjst" (OuterVolumeSpecName: "kube-api-access-wrjst") pod "4c5be461-2648-4dbe-9e11-fe1375d7ee26" (UID: "4c5be461-2648-4dbe-9e11-fe1375d7ee26"). InnerVolumeSpecName "kube-api-access-wrjst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.546953 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a30e702-af13-444a-87ee-f6d9bc19bd7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a30e702-af13-444a-87ee-f6d9bc19bd7a" (UID: "6a30e702-af13-444a-87ee-f6d9bc19bd7a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.548041 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5be461-2648-4dbe-9e11-fe1375d7ee26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c5be461-2648-4dbe-9e11-fe1375d7ee26" (UID: "4c5be461-2648-4dbe-9e11-fe1375d7ee26"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.548171 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a30e702-af13-444a-87ee-f6d9bc19bd7a-kube-api-access-n4gtg" (OuterVolumeSpecName: "kube-api-access-n4gtg") pod "6a30e702-af13-444a-87ee-f6d9bc19bd7a" (UID: "6a30e702-af13-444a-87ee-f6d9bc19bd7a"). 
InnerVolumeSpecName "kube-api-access-n4gtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643131 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643162 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643171 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643184 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4gtg\" (UniqueName: \"kubernetes.io/projected/6a30e702-af13-444a-87ee-f6d9bc19bd7a-kube-api-access-n4gtg\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643198 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a30e702-af13-444a-87ee-f6d9bc19bd7a-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643208 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5be461-2648-4dbe-9e11-fe1375d7ee26-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643218 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c5be461-2648-4dbe-9e11-fe1375d7ee26-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643227 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrjst\" (UniqueName: \"kubernetes.io/projected/4c5be461-2648-4dbe-9e11-fe1375d7ee26-kube-api-access-wrjst\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:42 crc kubenswrapper[4872]: I0203 06:05:42.643235 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a30e702-af13-444a-87ee-f6d9bc19bd7a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.097513 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xwrls"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.098362 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.103326 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.118397 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwrls"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.150088 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1202db8-36c7-47b9-b951-be67cc9c1c44-catalog-content\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.150157 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1202db8-36c7-47b9-b951-be67cc9c1c44-utilities\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.150396 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpn74\" (UniqueName: \"kubernetes.io/projected/d1202db8-36c7-47b9-b951-be67cc9c1c44-kube-api-access-zpn74\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.251362 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1202db8-36c7-47b9-b951-be67cc9c1c44-catalog-content\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.251782 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1202db8-36c7-47b9-b951-be67cc9c1c44-utilities\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.251977 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpn74\" (UniqueName: \"kubernetes.io/projected/d1202db8-36c7-47b9-b951-be67cc9c1c44-kube-api-access-zpn74\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.252518 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1202db8-36c7-47b9-b951-be67cc9c1c44-catalog-content\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.252845 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1202db8-36c7-47b9-b951-be67cc9c1c44-utilities\") pod \"community-operators-xwrls\" (UID: 
\"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.273769 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpn74\" (UniqueName: \"kubernetes.io/projected/d1202db8-36c7-47b9-b951-be67cc9c1c44-kube-api-access-zpn74\") pod \"community-operators-xwrls\" (UID: \"d1202db8-36c7-47b9-b951-be67cc9c1c44\") " pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.412649 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.412786 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.413172 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b84794c-cqjlw" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.512415 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.520365 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.521927 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.524235 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96b64b5cc-jmntx"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.527869 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.528244 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.528398 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.528580 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.530715 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.531127 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.536353 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.551157 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-cqjlw"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.554145 4872 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-796b84794c-cqjlw"] Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.554734 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-794q6\" (UniqueName: \"kubernetes.io/projected/812189a7-8048-4197-9160-a703bfefcee2-kube-api-access-794q6\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.554839 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-client-ca\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.558593 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-config\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.558910 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812189a7-8048-4197-9160-a703bfefcee2-serving-cert\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.663381 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-794q6\" (UniqueName: \"kubernetes.io/projected/812189a7-8048-4197-9160-a703bfefcee2-kube-api-access-794q6\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.663440 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-client-ca\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.663505 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-config\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.663534 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812189a7-8048-4197-9160-a703bfefcee2-serving-cert\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: 
\"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.664772 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-client-ca\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.667207 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-config\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.672218 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwrls"] Feb 03 06:05:43 crc kubenswrapper[4872]: W0203 06:05:43.684890 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1202db8_36c7_47b9_b951_be67cc9c1c44.slice/crio-004c41ab216dfe875d139512045339b90d378fa8245f0a4c05c5d742429796f5 WatchSource:0}: Error finding container 004c41ab216dfe875d139512045339b90d378fa8245f0a4c05c5d742429796f5: Status 404 returned error can't find the container with id 004c41ab216dfe875d139512045339b90d378fa8245f0a4c05c5d742429796f5 Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.686514 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812189a7-8048-4197-9160-a703bfefcee2-serving-cert\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.688063 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-794q6\" (UniqueName: \"kubernetes.io/projected/812189a7-8048-4197-9160-a703bfefcee2-kube-api-access-794q6\") pod \"route-controller-manager-69c7cb958d-lzm7t\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:43 crc kubenswrapper[4872]: I0203 06:05:43.837887 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.095176 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t"] Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.128527 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5be461-2648-4dbe-9e11-fe1375d7ee26" path="/var/lib/kubelet/pods/4c5be461-2648-4dbe-9e11-fe1375d7ee26/volumes" Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.129269 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a30e702-af13-444a-87ee-f6d9bc19bd7a" path="/var/lib/kubelet/pods/6a30e702-af13-444a-87ee-f6d9bc19bd7a/volumes" Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.418646 4872 generic.go:334] "Generic (PLEG): container finished" podID="d1202db8-36c7-47b9-b951-be67cc9c1c44" containerID="5d168fa7239b0410055d35de35e1c4076a86bc22cfe14df5d86a03d549848ea4" exitCode=0 Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.418872 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwrls" event={"ID":"d1202db8-36c7-47b9-b951-be67cc9c1c44","Type":"ContainerDied","Data":"5d168fa7239b0410055d35de35e1c4076a86bc22cfe14df5d86a03d549848ea4"} Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.418925 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwrls" event={"ID":"d1202db8-36c7-47b9-b951-be67cc9c1c44","Type":"ContainerStarted","Data":"004c41ab216dfe875d139512045339b90d378fa8245f0a4c05c5d742429796f5"} Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.422191 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" event={"ID":"812189a7-8048-4197-9160-a703bfefcee2","Type":"ContainerStarted","Data":"6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c"} Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.425825 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.425933 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" event={"ID":"812189a7-8048-4197-9160-a703bfefcee2","Type":"ContainerStarted","Data":"041b6a7bf03a0fb94b303b0ae1ffa89f925bde073135bd8d4c8eca0658382598"} Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.462389 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" podStartSLOduration=2.462362428 podStartE2EDuration="2.462362428s" podCreationTimestamp="2026-02-03 06:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:05:44.459271807 +0000 UTC m=+315.041963261" watchObservedRunningTime="2026-02-03 06:05:44.462362428 +0000 UTC m=+315.045053882" Feb 03 06:05:44 crc kubenswrapper[4872]: I0203 06:05:44.758096 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.508808 4872 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-operators-prm6b"] Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.510614 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.514119 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.521976 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prm6b"] Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.589359 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-utilities\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.589576 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-catalog-content\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.589650 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r874k\" (UniqueName: \"kubernetes.io/projected/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-kube-api-access-r874k\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.690521 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-catalog-content\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.690585 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r874k\" (UniqueName: \"kubernetes.io/projected/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-kube-api-access-r874k\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.690616 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-utilities\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.691304 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-utilities\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.691780 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-catalog-content\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.716506 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r874k\" (UniqueName: \"kubernetes.io/projected/45ac7feb-4f3a-459f-ab99-1409ff1e62bb-kube-api-access-r874k\") pod \"redhat-operators-prm6b\" (UID: \"45ac7feb-4f3a-459f-ab99-1409ff1e62bb\") " pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.722825 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-qw2w6"] Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.723804 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.730400 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.730741 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.731033 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.731326 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.731617 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.731809 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.746562 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-qw2w6"] Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.750671 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.792104 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-config\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.792169 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dfa144e-9e70-4e50-867d-4b2cf104549d-serving-cert\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.792197 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-proxy-ca-bundles\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.792242 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrwb\" (UniqueName: \"kubernetes.io/projected/2dfa144e-9e70-4e50-867d-4b2cf104549d-kube-api-access-4nrwb\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.792410 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-client-ca\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.850950 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.895518 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-config\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.895599 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dfa144e-9e70-4e50-867d-4b2cf104549d-serving-cert\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.895633 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-proxy-ca-bundles\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.895720 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrwb\" (UniqueName: \"kubernetes.io/projected/2dfa144e-9e70-4e50-867d-4b2cf104549d-kube-api-access-4nrwb\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.895785 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-client-ca\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.896732 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-client-ca\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.898005 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-config\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.903452 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-proxy-ca-bundles\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.907031 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dfa144e-9e70-4e50-867d-4b2cf104549d-serving-cert\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:45 crc kubenswrapper[4872]: I0203 06:05:45.919295 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrwb\" (UniqueName: \"kubernetes.io/projected/2dfa144e-9e70-4e50-867d-4b2cf104549d-kube-api-access-4nrwb\") pod \"controller-manager-84765f478-qw2w6\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:46 crc kubenswrapper[4872]: I0203 06:05:46.058289 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:46 crc kubenswrapper[4872]: I0203 06:05:46.281591 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-prm6b"] Feb 03 06:05:46 crc kubenswrapper[4872]: W0203 06:05:46.325136 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ac7feb_4f3a_459f_ab99_1409ff1e62bb.slice/crio-8528bfac0290d869ea0a961b5875d1a97f1919f1d77b7b6c21c2e9d893b27669 WatchSource:0}: Error finding container 8528bfac0290d869ea0a961b5875d1a97f1919f1d77b7b6c21c2e9d893b27669: Status 404 returned error can't find the container with id 8528bfac0290d869ea0a961b5875d1a97f1919f1d77b7b6c21c2e9d893b27669 Feb 03 06:05:46 crc kubenswrapper[4872]: I0203 06:05:46.436205 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prm6b" event={"ID":"45ac7feb-4f3a-459f-ab99-1409ff1e62bb","Type":"ContainerStarted","Data":"8528bfac0290d869ea0a961b5875d1a97f1919f1d77b7b6c21c2e9d893b27669"} Feb 03 06:05:46 crc kubenswrapper[4872]: I0203 06:05:46.439681 4872 generic.go:334] "Generic (PLEG): container finished" podID="d1202db8-36c7-47b9-b951-be67cc9c1c44" containerID="7f7ff65c78a68b11c611b4cca3e2d06c195b30af866a9790964c0ae3bf86f141" exitCode=0 Feb 03 06:05:46 crc kubenswrapper[4872]: I0203 06:05:46.440651 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwrls" event={"ID":"d1202db8-36c7-47b9-b951-be67cc9c1c44","Type":"ContainerDied","Data":"7f7ff65c78a68b11c611b4cca3e2d06c195b30af866a9790964c0ae3bf86f141"} Feb 03 06:05:46 crc kubenswrapper[4872]: I0203 06:05:46.500275 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-qw2w6"] Feb 03 06:05:46 crc kubenswrapper[4872]: W0203 06:05:46.509270 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dfa144e_9e70_4e50_867d_4b2cf104549d.slice/crio-986d03a438aa25d354a63480274ed059baa7c6dd27aa122171cddbd78abdec0c WatchSource:0}: Error finding container 986d03a438aa25d354a63480274ed059baa7c6dd27aa122171cddbd78abdec0c: Status 404 returned error can't find the container with id 986d03a438aa25d354a63480274ed059baa7c6dd27aa122171cddbd78abdec0c Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.445390 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" event={"ID":"2dfa144e-9e70-4e50-867d-4b2cf104549d","Type":"ContainerStarted","Data":"8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58"} Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.445667 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" event={"ID":"2dfa144e-9e70-4e50-867d-4b2cf104549d","Type":"ContainerStarted","Data":"986d03a438aa25d354a63480274ed059baa7c6dd27aa122171cddbd78abdec0c"} Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.445907 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.447583 4872 generic.go:334] "Generic (PLEG): container finished" podID="45ac7feb-4f3a-459f-ab99-1409ff1e62bb" 
containerID="2ca35144f93c9e32d9b53651c903c09f82aa07bb31379ace47bfcd5963298156" exitCode=0 Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.447627 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prm6b" event={"ID":"45ac7feb-4f3a-459f-ab99-1409ff1e62bb","Type":"ContainerDied","Data":"2ca35144f93c9e32d9b53651c903c09f82aa07bb31379ace47bfcd5963298156"} Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.450370 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwrls" event={"ID":"d1202db8-36c7-47b9-b951-be67cc9c1c44","Type":"ContainerStarted","Data":"c1fef1a07b02474f357c478f6f29e2cd6caec6d058ab9763dfe50c0c8be691c4"} Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.453916 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:05:47 crc kubenswrapper[4872]: I0203 06:05:47.466169 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" podStartSLOduration=5.466151983 podStartE2EDuration="5.466151983s" podCreationTimestamp="2026-02-03 06:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:05:47.463793191 +0000 UTC m=+318.046484605" watchObservedRunningTime="2026-02-03 06:05:47.466151983 +0000 UTC m=+318.048843397" Feb 03 06:05:48 crc kubenswrapper[4872]: I0203 06:05:48.456005 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prm6b" event={"ID":"45ac7feb-4f3a-459f-ab99-1409ff1e62bb","Type":"ContainerStarted","Data":"cfe160056a013f795563a22d70380a33d41f11d7f69e34dd04949001520b7fbf"} Feb 03 06:05:48 crc kubenswrapper[4872]: I0203 06:05:48.473386 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xwrls" podStartSLOduration=3.053177164 podStartE2EDuration="5.473373811s" podCreationTimestamp="2026-02-03 06:05:43 +0000 UTC" firstStartedPulling="2026-02-03 06:05:44.420529677 +0000 UTC m=+315.003221091" lastFinishedPulling="2026-02-03 06:05:46.840726324 +0000 UTC m=+317.423417738" observedRunningTime="2026-02-03 06:05:47.541327584 +0000 UTC m=+318.124018998" watchObservedRunningTime="2026-02-03 06:05:48.473373811 +0000 UTC m=+319.056065225" Feb 03 06:05:49 crc kubenswrapper[4872]: I0203 06:05:49.462876 4872 generic.go:334] "Generic (PLEG): container finished" podID="45ac7feb-4f3a-459f-ab99-1409ff1e62bb" containerID="cfe160056a013f795563a22d70380a33d41f11d7f69e34dd04949001520b7fbf" exitCode=0 Feb 03 06:05:49 crc kubenswrapper[4872]: I0203 06:05:49.464721 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prm6b" event={"ID":"45ac7feb-4f3a-459f-ab99-1409ff1e62bb","Type":"ContainerDied","Data":"cfe160056a013f795563a22d70380a33d41f11d7f69e34dd04949001520b7fbf"} Feb 03 06:05:50 crc kubenswrapper[4872]: I0203 06:05:50.363325 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 06:05:50 crc kubenswrapper[4872]: I0203 06:05:50.471384 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-prm6b" event={"ID":"45ac7feb-4f3a-459f-ab99-1409ff1e62bb","Type":"ContainerStarted","Data":"cc3c4a14e3750f8f174c99b568c6a1b612de0fbbea2bdf2efd841ab5738236aa"} 
Feb 03 06:05:50 crc kubenswrapper[4872]: I0203 06:05:50.496899 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-prm6b" podStartSLOduration=2.966790044 podStartE2EDuration="5.496883077s" podCreationTimestamp="2026-02-03 06:05:45 +0000 UTC" firstStartedPulling="2026-02-03 06:05:47.449244518 +0000 UTC m=+318.031935932" lastFinishedPulling="2026-02-03 06:05:49.979337541 +0000 UTC m=+320.562028965" observedRunningTime="2026-02-03 06:05:50.494815273 +0000 UTC m=+321.077506687" watchObservedRunningTime="2026-02-03 06:05:50.496883077 +0000 UTC m=+321.079574501" Feb 03 06:05:51 crc kubenswrapper[4872]: I0203 06:05:51.139643 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 06:05:53 crc kubenswrapper[4872]: I0203 06:05:53.413523 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:53 crc kubenswrapper[4872]: I0203 06:05:53.414304 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:53 crc kubenswrapper[4872]: I0203 06:05:53.473294 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:53 crc kubenswrapper[4872]: I0203 06:05:53.539223 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xwrls" Feb 03 06:05:55 crc kubenswrapper[4872]: I0203 06:05:55.851521 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:55 crc kubenswrapper[4872]: I0203 06:05:55.851824 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:05:56 crc kubenswrapper[4872]: I0203 06:05:56.905312 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-prm6b" podUID="45ac7feb-4f3a-459f-ab99-1409ff1e62bb" containerName="registry-server" probeResult="failure" output=< Feb 03 06:05:56 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:05:56 crc kubenswrapper[4872]: > Feb 03 06:05:59 crc kubenswrapper[4872]: I0203 06:05:59.748348 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t"] Feb 03 06:05:59 crc kubenswrapper[4872]: I0203 06:05:59.748863 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" podUID="812189a7-8048-4197-9160-a703bfefcee2" containerName="route-controller-manager" containerID="cri-o://6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c" gracePeriod=30 Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.197164 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.295527 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-client-ca\") pod \"812189a7-8048-4197-9160-a703bfefcee2\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.295599 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812189a7-8048-4197-9160-a703bfefcee2-serving-cert\") pod \"812189a7-8048-4197-9160-a703bfefcee2\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.295621 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-config\") pod \"812189a7-8048-4197-9160-a703bfefcee2\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.295725 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-794q6\" (UniqueName: \"kubernetes.io/projected/812189a7-8048-4197-9160-a703bfefcee2-kube-api-access-794q6\") pod \"812189a7-8048-4197-9160-a703bfefcee2\" (UID: \"812189a7-8048-4197-9160-a703bfefcee2\") " Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.297449 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-client-ca" (OuterVolumeSpecName: "client-ca") pod "812189a7-8048-4197-9160-a703bfefcee2" (UID: "812189a7-8048-4197-9160-a703bfefcee2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.298070 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-config" (OuterVolumeSpecName: "config") pod "812189a7-8048-4197-9160-a703bfefcee2" (UID: "812189a7-8048-4197-9160-a703bfefcee2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.306869 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812189a7-8048-4197-9160-a703bfefcee2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "812189a7-8048-4197-9160-a703bfefcee2" (UID: "812189a7-8048-4197-9160-a703bfefcee2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.306940 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812189a7-8048-4197-9160-a703bfefcee2-kube-api-access-794q6" (OuterVolumeSpecName: "kube-api-access-794q6") pod "812189a7-8048-4197-9160-a703bfefcee2" (UID: "812189a7-8048-4197-9160-a703bfefcee2"). InnerVolumeSpecName "kube-api-access-794q6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.397324 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-794q6\" (UniqueName: \"kubernetes.io/projected/812189a7-8048-4197-9160-a703bfefcee2-kube-api-access-794q6\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.397361 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.397375 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812189a7-8048-4197-9160-a703bfefcee2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.397384 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812189a7-8048-4197-9160-a703bfefcee2-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.525828 4872 generic.go:334] "Generic (PLEG): container finished" podID="812189a7-8048-4197-9160-a703bfefcee2" containerID="6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c" exitCode=0 Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.525892 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" event={"ID":"812189a7-8048-4197-9160-a703bfefcee2","Type":"ContainerDied","Data":"6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c"} Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.525931 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" event={"ID":"812189a7-8048-4197-9160-a703bfefcee2","Type":"ContainerDied","Data":"041b6a7bf03a0fb94b303b0ae1ffa89f925bde073135bd8d4c8eca0658382598"} Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.525959 4872 scope.go:117] "RemoveContainer" containerID="6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.526109 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.553989 4872 scope.go:117] "RemoveContainer" containerID="6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c" Feb 03 06:06:00 crc kubenswrapper[4872]: E0203 06:06:00.554740 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c\": container with ID starting with 6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c not found: ID does not exist" containerID="6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.554795 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c"} err="failed to get container status \"6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c\": rpc error: code = NotFound desc = could not find container \"6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c\": container with ID starting with 6dac79ea6fcae78235602a3a200b0ac6952a8a4cde96f91f6049cbe9611ca26c not found: ID does not exist" Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.566131 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t"] Feb 03 06:06:00 crc kubenswrapper[4872]: I0203 06:06:00.574547 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c7cb958d-lzm7t"] Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.271637 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.271755 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.737632 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4"] Feb 03 06:06:01 crc kubenswrapper[4872]: E0203 06:06:01.738138 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812189a7-8048-4197-9160-a703bfefcee2" containerName="route-controller-manager" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.738150 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="812189a7-8048-4197-9160-a703bfefcee2" containerName="route-controller-manager" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.738244 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="812189a7-8048-4197-9160-a703bfefcee2" containerName="route-controller-manager" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.738649 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.744073 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4"] Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.745283 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.745478 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.746223 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.746372 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.747081 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.747519 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.916610 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-config\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.916832 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-client-ca\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.916923 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-serving-cert\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:01 crc kubenswrapper[4872]: I0203 06:06:01.917026 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvc76\" (UniqueName: \"kubernetes.io/projected/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-kube-api-access-dvc76\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.018413 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-serving-cert\") pod 
\"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.018733 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvc76\" (UniqueName: \"kubernetes.io/projected/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-kube-api-access-dvc76\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.018869 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-config\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.018977 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-client-ca\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.021307 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-config\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.021352 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-client-ca\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.025486 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-serving-cert\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.044297 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvc76\" (UniqueName: \"kubernetes.io/projected/47e23fe3-6dd8-44de-a9eb-45c29f7358ff-kube-api-access-dvc76\") pod \"route-controller-manager-5d879c7f78-n4mg4\" (UID: \"47e23fe3-6dd8-44de-a9eb-45c29f7358ff\") " pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.105658 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.133235 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812189a7-8048-4197-9160-a703bfefcee2" path="/var/lib/kubelet/pods/812189a7-8048-4197-9160-a703bfefcee2/volumes" Feb 03 06:06:02 crc kubenswrapper[4872]: I0203 06:06:02.585843 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4"] Feb 03 06:06:03 crc kubenswrapper[4872]: I0203 06:06:03.544552 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" event={"ID":"47e23fe3-6dd8-44de-a9eb-45c29f7358ff","Type":"ContainerStarted","Data":"213795ba2ae902d0ac42e5c2835d6877ef2e93b86cc87a3cc3328f6edb912b21"} Feb 03 06:06:03 crc kubenswrapper[4872]: I0203 06:06:03.544969 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:03 crc kubenswrapper[4872]: I0203 06:06:03.544992 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" event={"ID":"47e23fe3-6dd8-44de-a9eb-45c29f7358ff","Type":"ContainerStarted","Data":"818a9956285ccb908ce42bd7fea98169e2cc9586af931952b36a758a7877616b"} Feb 03 06:06:03 crc kubenswrapper[4872]: I0203 06:06:03.551718 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" Feb 03 06:06:03 crc kubenswrapper[4872]: I0203 06:06:03.566473 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d879c7f78-n4mg4" podStartSLOduration=4.566442899 podStartE2EDuration="4.566442899s" podCreationTimestamp="2026-02-03 06:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:06:03.564642763 +0000 UTC m=+334.147334187" watchObservedRunningTime="2026-02-03 06:06:03.566442899 +0000 UTC m=+334.149134363" Feb 03 06:06:05 crc kubenswrapper[4872]: I0203 06:06:05.908093 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:06:05 crc kubenswrapper[4872]: I0203 06:06:05.955123 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-prm6b" Feb 03 06:06:19 crc kubenswrapper[4872]: I0203 06:06:19.755564 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-qw2w6"] Feb 03 06:06:19 crc kubenswrapper[4872]: I0203 06:06:19.756446 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" podUID="2dfa144e-9e70-4e50-867d-4b2cf104549d" containerName="controller-manager" containerID="cri-o://8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58" gracePeriod=30 Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.224657 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.356741 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dfa144e-9e70-4e50-867d-4b2cf104549d-serving-cert\") pod \"2dfa144e-9e70-4e50-867d-4b2cf104549d\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.356807 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nrwb\" (UniqueName: \"kubernetes.io/projected/2dfa144e-9e70-4e50-867d-4b2cf104549d-kube-api-access-4nrwb\") pod \"2dfa144e-9e70-4e50-867d-4b2cf104549d\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.356841 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-config\") pod \"2dfa144e-9e70-4e50-867d-4b2cf104549d\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.356901 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-proxy-ca-bundles\") pod \"2dfa144e-9e70-4e50-867d-4b2cf104549d\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.356946 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-client-ca\") pod \"2dfa144e-9e70-4e50-867d-4b2cf104549d\" (UID: \"2dfa144e-9e70-4e50-867d-4b2cf104549d\") " Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.357663 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2dfa144e-9e70-4e50-867d-4b2cf104549d" (UID: "2dfa144e-9e70-4e50-867d-4b2cf104549d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.357846 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-client-ca" (OuterVolumeSpecName: "client-ca") pod "2dfa144e-9e70-4e50-867d-4b2cf104549d" (UID: "2dfa144e-9e70-4e50-867d-4b2cf104549d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.357952 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-config" (OuterVolumeSpecName: "config") pod "2dfa144e-9e70-4e50-867d-4b2cf104549d" (UID: "2dfa144e-9e70-4e50-867d-4b2cf104549d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.362058 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfa144e-9e70-4e50-867d-4b2cf104549d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2dfa144e-9e70-4e50-867d-4b2cf104549d" (UID: "2dfa144e-9e70-4e50-867d-4b2cf104549d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.365804 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfa144e-9e70-4e50-867d-4b2cf104549d-kube-api-access-4nrwb" (OuterVolumeSpecName: "kube-api-access-4nrwb") pod "2dfa144e-9e70-4e50-867d-4b2cf104549d" (UID: "2dfa144e-9e70-4e50-867d-4b2cf104549d"). InnerVolumeSpecName "kube-api-access-4nrwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.458938 4872 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.458999 4872 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dfa144e-9e70-4e50-867d-4b2cf104549d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.459015 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nrwb\" (UniqueName: \"kubernetes.io/projected/2dfa144e-9e70-4e50-867d-4b2cf104549d-kube-api-access-4nrwb\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.459030 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.459070 4872 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2dfa144e-9e70-4e50-867d-4b2cf104549d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.670412 4872 generic.go:334] "Generic (PLEG): container finished" podID="2dfa144e-9e70-4e50-867d-4b2cf104549d" containerID="8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58" exitCode=0 Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.670482 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" event={"ID":"2dfa144e-9e70-4e50-867d-4b2cf104549d","Type":"ContainerDied","Data":"8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58"} Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.670497 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.670534 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84765f478-qw2w6" event={"ID":"2dfa144e-9e70-4e50-867d-4b2cf104549d","Type":"ContainerDied","Data":"986d03a438aa25d354a63480274ed059baa7c6dd27aa122171cddbd78abdec0c"} Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.670565 4872 scope.go:117] "RemoveContainer" containerID="8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.695632 4872 scope.go:117] "RemoveContainer" containerID="8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58" Feb 03 06:06:20 crc kubenswrapper[4872]: E0203 06:06:20.696477 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58\": container with ID starting with 8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58 not found: ID does not exist" containerID="8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.696545 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58"} err="failed to get container status \"8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58\": rpc error: code = NotFound desc = could not find container \"8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58\": container with ID starting with 8fb5c0a56ad4c54347acfecd7e113d8159e3efe63e6b6a40343736b3c2ab3d58 not found: ID does not exist" Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.720615 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-qw2w6"] Feb 03 06:06:20 crc kubenswrapper[4872]: I0203 06:06:20.729638 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84765f478-qw2w6"] Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.757938 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dfc484c98-zcr7m"] Feb 03 06:06:21 crc kubenswrapper[4872]: E0203 06:06:21.758356 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfa144e-9e70-4e50-867d-4b2cf104549d" containerName="controller-manager" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.758386 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfa144e-9e70-4e50-867d-4b2cf104549d" containerName="controller-manager" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.758601 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfa144e-9e70-4e50-867d-4b2cf104549d" containerName="controller-manager" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.759399 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.768715 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.768929 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.769413 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.769617 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.772263 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.776887 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfc484c98-zcr7m"] Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.778100 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.779559 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.885745 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-client-ca\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.885818 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-proxy-ca-bundles\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.885850 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6febe877-c929-4369-9292-12952d55d6eb-serving-cert\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.885876 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qpr\" (UniqueName: \"kubernetes.io/projected/6febe877-c929-4369-9292-12952d55d6eb-kube-api-access-w7qpr\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.885910 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-config\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.987246 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6febe877-c929-4369-9292-12952d55d6eb-serving-cert\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.987285 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7qpr\" (UniqueName: \"kubernetes.io/projected/6febe877-c929-4369-9292-12952d55d6eb-kube-api-access-w7qpr\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.987315 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-config\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.987358 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-client-ca\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.987390 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-proxy-ca-bundles\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.988346 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-proxy-ca-bundles\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.988914 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-client-ca\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.991155 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6febe877-c929-4369-9292-12952d55d6eb-config\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" 
Feb 03 06:06:21 crc kubenswrapper[4872]: I0203 06:06:21.995615 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6febe877-c929-4369-9292-12952d55d6eb-serving-cert\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.015522 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7qpr\" (UniqueName: \"kubernetes.io/projected/6febe877-c929-4369-9292-12952d55d6eb-kube-api-access-w7qpr\") pod \"controller-manager-6dfc484c98-zcr7m\" (UID: \"6febe877-c929-4369-9292-12952d55d6eb\") " pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.077003 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.137913 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dfa144e-9e70-4e50-867d-4b2cf104549d" path="/var/lib/kubelet/pods/2dfa144e-9e70-4e50-867d-4b2cf104549d/volumes" Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.518139 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfc484c98-zcr7m"] Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.684304 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" event={"ID":"6febe877-c929-4369-9292-12952d55d6eb","Type":"ContainerStarted","Data":"a6c5a9d812b8235790451881006682ab83819534835c57860cf71742a00930e5"} Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.684364 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" event={"ID":"6febe877-c929-4369-9292-12952d55d6eb","Type":"ContainerStarted","Data":"2a1aea45cdc033d7fb12de8ba7b1da208486589c1aa0230a89449bebc72336f7"} Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.685599 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.686780 4872 patch_prober.go:28] interesting pod/controller-manager-6dfc484c98-zcr7m container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.686839 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" podUID="6febe877-c929-4369-9292-12952d55d6eb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Feb 03 06:06:22 crc kubenswrapper[4872]: I0203 06:06:22.700520 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" podStartSLOduration=3.700501913 podStartE2EDuration="3.700501913s" podCreationTimestamp="2026-02-03 06:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-03 06:06:22.697495176 +0000 UTC m=+353.280186590" watchObservedRunningTime="2026-02-03 06:06:22.700501913 +0000 UTC m=+353.283193327" Feb 03 06:06:23 crc kubenswrapper[4872]: I0203 06:06:23.691522 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dfc484c98-zcr7m" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.136067 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9272h"] Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.137279 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.230953 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9272h"] Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335233 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-bound-sa-token\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335301 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69c45dda-9a9b-489d-afa3-0a218faa7443-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335361 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69c45dda-9a9b-489d-afa3-0a218faa7443-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335427 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-525r7\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-kube-api-access-525r7\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335459 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-registry-tls\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335609 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69c45dda-9a9b-489d-afa3-0a218faa7443-registry-certificates\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335742 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69c45dda-9a9b-489d-afa3-0a218faa7443-trusted-ca\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.335813 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.384311 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437140 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-525r7\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-kube-api-access-525r7\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437184 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-registry-tls\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437218 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69c45dda-9a9b-489d-afa3-0a218faa7443-registry-certificates\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437246 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69c45dda-9a9b-489d-afa3-0a218faa7443-trusted-ca\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437275 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-bound-sa-token\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437304 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69c45dda-9a9b-489d-afa3-0a218faa7443-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437346 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69c45dda-9a9b-489d-afa3-0a218faa7443-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.437761 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69c45dda-9a9b-489d-afa3-0a218faa7443-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.438584 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69c45dda-9a9b-489d-afa3-0a218faa7443-trusted-ca\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.439333 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69c45dda-9a9b-489d-afa3-0a218faa7443-registry-certificates\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.443486 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-registry-tls\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.449357 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69c45dda-9a9b-489d-afa3-0a218faa7443-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.452429 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-525r7\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-kube-api-access-525r7\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.456444 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69c45dda-9a9b-489d-afa3-0a218faa7443-bound-sa-token\") pod \"image-registry-66df7c8f76-9272h\" (UID: \"69c45dda-9a9b-489d-afa3-0a218faa7443\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:30 crc kubenswrapper[4872]: I0203 06:06:30.749617 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:31 crc kubenswrapper[4872]: I0203 06:06:31.246468 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9272h"] Feb 03 06:06:31 crc kubenswrapper[4872]: W0203 06:06:31.259871 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69c45dda_9a9b_489d_afa3_0a218faa7443.slice/crio-fcd760bed4147ae70410cd951744e7edf5b21373c7ceba69dd2d85cce6e4eefc WatchSource:0}: Error finding container fcd760bed4147ae70410cd951744e7edf5b21373c7ceba69dd2d85cce6e4eefc: Status 404 returned error can't find the container with id fcd760bed4147ae70410cd951744e7edf5b21373c7ceba69dd2d85cce6e4eefc Feb 03 06:06:31 crc kubenswrapper[4872]: I0203 06:06:31.277624 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:06:31 crc kubenswrapper[4872]: I0203 06:06:31.277706 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:06:31 crc kubenswrapper[4872]: I0203 06:06:31.742480 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9272h" event={"ID":"69c45dda-9a9b-489d-afa3-0a218faa7443","Type":"ContainerStarted","Data":"bc2797e67467f5ef91f403bc34ee5bd2e54ba57064582051f12b84d9306029ce"} Feb 03 06:06:31 crc kubenswrapper[4872]: I0203 06:06:31.742529 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9272h" event={"ID":"69c45dda-9a9b-489d-afa3-0a218faa7443","Type":"ContainerStarted","Data":"fcd760bed4147ae70410cd951744e7edf5b21373c7ceba69dd2d85cce6e4eefc"} Feb 03 06:06:31 crc kubenswrapper[4872]: I0203 06:06:31.742794 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:50 crc kubenswrapper[4872]: I0203 06:06:50.760516 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9272h" Feb 03 06:06:50 crc kubenswrapper[4872]: I0203 06:06:50.794572 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9272h" podStartSLOduration=20.79453725 podStartE2EDuration="20.79453725s" podCreationTimestamp="2026-02-03 06:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:06:31.763900366 +0000 UTC m=+362.346591780" watchObservedRunningTime="2026-02-03 06:06:50.79453725 +0000 UTC m=+381.377228724" Feb 03 06:06:50 crc kubenswrapper[4872]: I0203 06:06:50.846667 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-7xf8z"] Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.271314 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.272363 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.272435 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.273329 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ace9d226a8cd5ac75a2cd87f022b584a0cdb93cf609db2260d7f1894dd4aabf"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.273434 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://2ace9d226a8cd5ac75a2cd87f022b584a0cdb93cf609db2260d7f1894dd4aabf" gracePeriod=600 Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.960830 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="2ace9d226a8cd5ac75a2cd87f022b584a0cdb93cf609db2260d7f1894dd4aabf" exitCode=0 Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.961167 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"2ace9d226a8cd5ac75a2cd87f022b584a0cdb93cf609db2260d7f1894dd4aabf"} Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.961199 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"56251d9aee6a397b40abf3eed474b8f789488d08ca0bb0ba1783f4b2052f93f8"} Feb 03 06:07:01 crc kubenswrapper[4872]: I0203 06:07:01.961219 4872 scope.go:117] "RemoveContainer" containerID="83f8ab527ab836120c7b9382e9b4f1f775b042a4ba52d2c088c67b95599560e7" Feb 03 06:07:15 crc kubenswrapper[4872]: I0203 06:07:15.921420 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" podUID="7976c56b-b1e2-432b-9abb-d88e6483c5bc" containerName="registry" containerID="cri-o://b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde" gracePeriod=30 Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.408461 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.563563 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.563671 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-bound-sa-token\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.563767 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7976c56b-b1e2-432b-9abb-d88e6483c5bc-installation-pull-secrets\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.563891 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb767\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-kube-api-access-nb767\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.563922 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7976c56b-b1e2-432b-9abb-d88e6483c5bc-ca-trust-extracted\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.563945 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-tls\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.563990 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-certificates\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.564011 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-trusted-ca\") pod \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\" (UID: \"7976c56b-b1e2-432b-9abb-d88e6483c5bc\") " Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.565416 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.566472 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.570992 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7976c56b-b1e2-432b-9abb-d88e6483c5bc-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.571177 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.572883 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.574779 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-kube-api-access-nb767" (OuterVolumeSpecName: "kube-api-access-nb767") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "kube-api-access-nb767". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.577781 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.599641 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7976c56b-b1e2-432b-9abb-d88e6483c5bc-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7976c56b-b1e2-432b-9abb-d88e6483c5bc" (UID: "7976c56b-b1e2-432b-9abb-d88e6483c5bc"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.665338 4872 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7976c56b-b1e2-432b-9abb-d88e6483c5bc-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.665391 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb767\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-kube-api-access-nb767\") on node \"crc\" DevicePath \"\"" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.665414 4872 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.665510 4872 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7976c56b-b1e2-432b-9abb-d88e6483c5bc-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.665532 4872 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.665550 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7976c56b-b1e2-432b-9abb-d88e6483c5bc-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:07:16 crc kubenswrapper[4872]: I0203 06:07:16.665570 4872 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7976c56b-b1e2-432b-9abb-d88e6483c5bc-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.063750 4872 generic.go:334] "Generic (PLEG): container finished" podID="7976c56b-b1e2-432b-9abb-d88e6483c5bc" containerID="b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde" exitCode=0 Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.063819 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" event={"ID":"7976c56b-b1e2-432b-9abb-d88e6483c5bc","Type":"ContainerDied","Data":"b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde"} Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.063868 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" event={"ID":"7976c56b-b1e2-432b-9abb-d88e6483c5bc","Type":"ContainerDied","Data":"0704f618eddf0203f546e9391169d0a9e1587b9baaffbc2d7f8325aa5cfc784c"} Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.063889 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7xf8z" Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.063896 4872 scope.go:117] "RemoveContainer" containerID="b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde" Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.095269 4872 scope.go:117] "RemoveContainer" containerID="b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde" Feb 03 06:07:17 crc kubenswrapper[4872]: E0203 06:07:17.095968 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde\": container with ID starting with b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde not found: ID does not exist" containerID="b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde" Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.096034 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde"} err="failed to get container status \"b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde\": rpc error: code = NotFound desc = could not find container \"b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde\": container with ID starting with b22abbf9014b375f8c361f4bf803427a6fb6c27ab681a08a3198cbe8aebd1dde not found: ID does not exist" Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.120281 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7xf8z"] Feb 03 06:07:17 crc kubenswrapper[4872]: I0203 06:07:17.130778 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7xf8z"] Feb 03 06:07:18 crc kubenswrapper[4872]: I0203 06:07:18.134999 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7976c56b-b1e2-432b-9abb-d88e6483c5bc" path="/var/lib/kubelet/pods/7976c56b-b1e2-432b-9abb-d88e6483c5bc/volumes" Feb 03 06:09:01 crc kubenswrapper[4872]: I0203 06:09:01.271594 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:09:01 crc kubenswrapper[4872]: I0203 06:09:01.272604 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:09:31 crc kubenswrapper[4872]: I0203 06:09:31.271293 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:09:31 crc kubenswrapper[4872]: I0203 06:09:31.272024 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:10:01 crc kubenswrapper[4872]: I0203 06:10:01.271169 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:10:01 crc kubenswrapper[4872]: I0203 06:10:01.271873 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:10:01 crc kubenswrapper[4872]: I0203 06:10:01.271936 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:10:01 crc kubenswrapper[4872]: I0203 06:10:01.272778 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56251d9aee6a397b40abf3eed474b8f789488d08ca0bb0ba1783f4b2052f93f8"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:10:01 crc kubenswrapper[4872]: I0203 06:10:01.272863 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://56251d9aee6a397b40abf3eed474b8f789488d08ca0bb0ba1783f4b2052f93f8" gracePeriod=600 Feb 03 06:10:02 crc kubenswrapper[4872]: I0203 06:10:02.190010 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="56251d9aee6a397b40abf3eed474b8f789488d08ca0bb0ba1783f4b2052f93f8" exitCode=0 Feb 03 06:10:02 crc kubenswrapper[4872]: I0203 06:10:02.190069 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"56251d9aee6a397b40abf3eed474b8f789488d08ca0bb0ba1783f4b2052f93f8"} Feb 03 06:10:02 crc kubenswrapper[4872]: I0203 06:10:02.190775 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"e60bd766edb6c61ff425105b834c7b74cd5da1d6ff6ffe6f2e66cc4bbe2ff323"} Feb 03 06:10:02 crc kubenswrapper[4872]: I0203 06:10:02.190812 4872 scope.go:117] "RemoveContainer" containerID="2ace9d226a8cd5ac75a2cd87f022b584a0cdb93cf609db2260d7f1894dd4aabf" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.582609 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc"] Feb 03 06:11:06 crc kubenswrapper[4872]: E0203 06:11:06.583365 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7976c56b-b1e2-432b-9abb-d88e6483c5bc" containerName="registry" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.583377 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7976c56b-b1e2-432b-9abb-d88e6483c5bc" containerName="registry" Feb 03 06:11:06 crc 
kubenswrapper[4872]: I0203 06:11:06.583473 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="7976c56b-b1e2-432b-9abb-d88e6483c5bc" containerName="registry" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.583800 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.586794 4872 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-q8lbs" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.587055 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.587481 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.625524 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-x86ch"] Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.626609 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.629116 4872 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-k585v" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.633941 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc"] Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.640760 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vnbj8"] Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.641459 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vnbj8" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.642981 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-x86ch"] Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.643450 4872 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n9lzn" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.655582 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vnbj8"] Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.698918 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9skk\" (UniqueName: \"kubernetes.io/projected/93180a09-56e7-468c-9181-e88473627564-kube-api-access-g9skk\") pod \"cert-manager-cainjector-cf98fcc89-6kgzc\" (UID: \"93180a09-56e7-468c-9181-e88473627564\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.699165 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8tl\" (UniqueName: \"kubernetes.io/projected/958bf8fc-d47f-45ff-b237-64ea37f16e2d-kube-api-access-jg8tl\") pod \"cert-manager-webhook-687f57d79b-x86ch\" (UID: \"958bf8fc-d47f-45ff-b237-64ea37f16e2d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.699269 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktjqx\" (UniqueName: \"kubernetes.io/projected/d2894283-ae8b-4bb5-a0d0-825d14b8a2bc-kube-api-access-ktjqx\") pod \"cert-manager-858654f9db-vnbj8\" (UID: \"d2894283-ae8b-4bb5-a0d0-825d14b8a2bc\") " pod="cert-manager/cert-manager-858654f9db-vnbj8" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.800236 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8tl\" (UniqueName: \"kubernetes.io/projected/958bf8fc-d47f-45ff-b237-64ea37f16e2d-kube-api-access-jg8tl\") pod \"cert-manager-webhook-687f57d79b-x86ch\" (UID: \"958bf8fc-d47f-45ff-b237-64ea37f16e2d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.800334 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktjqx\" (UniqueName: \"kubernetes.io/projected/d2894283-ae8b-4bb5-a0d0-825d14b8a2bc-kube-api-access-ktjqx\") pod \"cert-manager-858654f9db-vnbj8\" (UID: \"d2894283-ae8b-4bb5-a0d0-825d14b8a2bc\") " pod="cert-manager/cert-manager-858654f9db-vnbj8" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.800373 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9skk\" (UniqueName: \"kubernetes.io/projected/93180a09-56e7-468c-9181-e88473627564-kube-api-access-g9skk\") pod \"cert-manager-cainjector-cf98fcc89-6kgzc\" (UID: \"93180a09-56e7-468c-9181-e88473627564\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.819876 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8tl\" (UniqueName: \"kubernetes.io/projected/958bf8fc-d47f-45ff-b237-64ea37f16e2d-kube-api-access-jg8tl\") pod \"cert-manager-webhook-687f57d79b-x86ch\" (UID: \"958bf8fc-d47f-45ff-b237-64ea37f16e2d\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.822882 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktjqx\" (UniqueName: \"kubernetes.io/projected/d2894283-ae8b-4bb5-a0d0-825d14b8a2bc-kube-api-access-ktjqx\") pod \"cert-manager-858654f9db-vnbj8\" (UID: \"d2894283-ae8b-4bb5-a0d0-825d14b8a2bc\") " pod="cert-manager/cert-manager-858654f9db-vnbj8" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.826175 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9skk\" (UniqueName: \"kubernetes.io/projected/93180a09-56e7-468c-9181-e88473627564-kube-api-access-g9skk\") pod \"cert-manager-cainjector-cf98fcc89-6kgzc\" (UID: \"93180a09-56e7-468c-9181-e88473627564\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.907963 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.947968 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" Feb 03 06:11:06 crc kubenswrapper[4872]: I0203 06:11:06.963434 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vnbj8" Feb 03 06:11:07 crc kubenswrapper[4872]: I0203 06:11:07.196824 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc"] Feb 03 06:11:07 crc kubenswrapper[4872]: W0203 06:11:07.203562 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93180a09_56e7_468c_9181_e88473627564.slice/crio-3430960d98fadaff94037fc233f07b5a86f1fd8214e85afb0da8e0f1dbd7b26f WatchSource:0}: Error finding container 3430960d98fadaff94037fc233f07b5a86f1fd8214e85afb0da8e0f1dbd7b26f: Status 404 returned error can't find the container with id 3430960d98fadaff94037fc233f07b5a86f1fd8214e85afb0da8e0f1dbd7b26f Feb 03 06:11:07 crc kubenswrapper[4872]: I0203 06:11:07.206150 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:11:07 crc kubenswrapper[4872]: I0203 06:11:07.244490 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-x86ch"] Feb 03 06:11:07 crc kubenswrapper[4872]: I0203 06:11:07.278345 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vnbj8"] Feb 03 06:11:07 crc kubenswrapper[4872]: W0203 06:11:07.280190 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2894283_ae8b_4bb5_a0d0_825d14b8a2bc.slice/crio-48570c3c3444ea978bc2f953a4b300bd1baa14800e69f53a839a00905c1faf6b WatchSource:0}: Error finding container 48570c3c3444ea978bc2f953a4b300bd1baa14800e69f53a839a00905c1faf6b: Status 404 returned error can't find the container with id 48570c3c3444ea978bc2f953a4b300bd1baa14800e69f53a839a00905c1faf6b Feb 03 06:11:07 crc kubenswrapper[4872]: I0203 06:11:07.665922 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" event={"ID":"958bf8fc-d47f-45ff-b237-64ea37f16e2d","Type":"ContainerStarted","Data":"23decb8e2609260e0a9615ce94f6df57cb758f5f5e725071e9b61ad440bf3d3b"} 
Feb 03 06:11:07 crc kubenswrapper[4872]: I0203 06:11:07.667896 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" event={"ID":"93180a09-56e7-468c-9181-e88473627564","Type":"ContainerStarted","Data":"3430960d98fadaff94037fc233f07b5a86f1fd8214e85afb0da8e0f1dbd7b26f"} Feb 03 06:11:07 crc kubenswrapper[4872]: I0203 06:11:07.669649 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vnbj8" event={"ID":"d2894283-ae8b-4bb5-a0d0-825d14b8a2bc","Type":"ContainerStarted","Data":"48570c3c3444ea978bc2f953a4b300bd1baa14800e69f53a839a00905c1faf6b"} Feb 03 06:11:12 crc kubenswrapper[4872]: I0203 06:11:12.703018 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" event={"ID":"93180a09-56e7-468c-9181-e88473627564","Type":"ContainerStarted","Data":"2d291bb756cb6c4ae4dd95a2001688d7642eadb20c87cad6daebced4eabb556d"} Feb 03 06:11:12 crc kubenswrapper[4872]: I0203 06:11:12.705366 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vnbj8" event={"ID":"d2894283-ae8b-4bb5-a0d0-825d14b8a2bc","Type":"ContainerStarted","Data":"7be1511dcdc89de2259958861463fe26ae774dc9c23a72c16d872c3ddd687356"} Feb 03 06:11:12 crc kubenswrapper[4872]: I0203 06:11:12.707927 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" event={"ID":"958bf8fc-d47f-45ff-b237-64ea37f16e2d","Type":"ContainerStarted","Data":"386ec6addd39637c6e220ca43549006dc944b262f449c1a5be3c5eafb933d3c9"} Feb 03 06:11:12 crc kubenswrapper[4872]: I0203 06:11:12.708117 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" Feb 03 06:11:12 crc kubenswrapper[4872]: I0203 06:11:12.723416 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-6kgzc" podStartSLOduration=2.34883013 podStartE2EDuration="6.723391562s" podCreationTimestamp="2026-02-03 06:11:06 +0000 UTC" firstStartedPulling="2026-02-03 06:11:07.205957516 +0000 UTC m=+637.788648930" lastFinishedPulling="2026-02-03 06:11:11.580518948 +0000 UTC m=+642.163210362" observedRunningTime="2026-02-03 06:11:12.719029728 +0000 UTC m=+643.301721182" watchObservedRunningTime="2026-02-03 06:11:12.723391562 +0000 UTC m=+643.306083016" Feb 03 06:11:12 crc kubenswrapper[4872]: I0203 06:11:12.762661 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vnbj8" podStartSLOduration=2.454507201 podStartE2EDuration="6.762633045s" podCreationTimestamp="2026-02-03 06:11:06 +0000 UTC" firstStartedPulling="2026-02-03 06:11:07.282676315 +0000 UTC m=+637.865367729" lastFinishedPulling="2026-02-03 06:11:11.590802149 +0000 UTC m=+642.173493573" observedRunningTime="2026-02-03 06:11:12.761460039 +0000 UTC m=+643.344151503" watchObservedRunningTime="2026-02-03 06:11:12.762633045 +0000 UTC m=+643.345324499" Feb 03 06:11:12 crc kubenswrapper[4872]: I0203 06:11:12.790638 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" podStartSLOduration=2.434135693 podStartE2EDuration="6.790613566s" podCreationTimestamp="2026-02-03 06:11:06 +0000 UTC" firstStartedPulling="2026-02-03 06:11:07.248761096 +0000 UTC m=+637.831452510" lastFinishedPulling="2026-02-03 06:11:11.605238959 +0000 UTC m=+642.187930383" 
observedRunningTime="2026-02-03 06:11:12.789337359 +0000 UTC m=+643.372028813" watchObservedRunningTime="2026-02-03 06:11:12.790613566 +0000 UTC m=+643.373305020" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.519389 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbpgt"] Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.520770 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="northd" containerID="cri-o://fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.520810 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-controller" containerID="cri-o://f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.520897 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="sbdb" containerID="cri-o://e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.520896 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-node" containerID="cri-o://6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.520956 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-acl-logging" containerID="cri-o://effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.521094 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.520832 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="nbdb" containerID="cri-o://077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.628287 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" containerID="cri-o://6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" gracePeriod=30 Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.733662 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.733734 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.737191 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/3.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.739043 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovn-acl-logging/0.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.740998 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovn-controller/0.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741403 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" exitCode=0 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741519 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" exitCode=0 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741574 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" exitCode=143 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741623 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" exitCode=143 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741728 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a"} Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741825 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06"} Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741881 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a"} Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.741954 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" 
event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54"} Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.744788 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/2.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.745330 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/1.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.745419 4872 generic.go:334] "Generic (PLEG): container finished" podID="db59aed5-04bc-4793-8938-196aace29feb" containerID="1648ba316b90acc20185dae52c795ff2915e2ad6ae7077f5dacd0dc1bdbd67db" exitCode=2 Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.745502 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerDied","Data":"1648ba316b90acc20185dae52c795ff2915e2ad6ae7077f5dacd0dc1bdbd67db"} Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.745574 4872 scope.go:117] "RemoveContainer" containerID="6b703cc0369647a33af3dc690f8aa99fa67a5fe918e1cb5de251d70ae4282936" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.746471 4872 scope.go:117] "RemoveContainer" containerID="1648ba316b90acc20185dae52c795ff2915e2ad6ae7077f5dacd0dc1bdbd67db" Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.746901 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-g2f65_openshift-multus(db59aed5-04bc-4793-8938-196aace29feb)\"" pod="openshift-multus/multus-g2f65" podUID="db59aed5-04bc-4793-8938-196aace29feb" Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.750351 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.753210 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.755645 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.755709 4872 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="sbdb" Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.755821 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 03 06:11:16 crc kubenswrapper[4872]: E0203 06:11:16.755841 4872 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="nbdb" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.953004 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-x86ch" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.979763 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/3.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.985614 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovn-acl-logging/0.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.986147 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovn-controller/0.log" Feb 03 06:11:16 crc kubenswrapper[4872]: I0203 06:11:16.986561 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057388 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtncz"] Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057622 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="nbdb" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057636 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="nbdb" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057651 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057660 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057669 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-node" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057705 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-node" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057720 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-acl-logging" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057728 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-acl-logging" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057740 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kubecfg-setup" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057762 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kubecfg-setup" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057775 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="northd" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057782 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="northd" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057791 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057797 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057805 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057812 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057821 4872 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057829 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057842 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057851 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057860 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057868 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.057882 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="sbdb" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.057889 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="sbdb" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058009 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058025 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058035 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="northd" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058046 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-acl-logging" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058054 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="sbdb" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058065 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-node" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058074 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058083 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovn-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058093 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="kube-rbac-proxy-ovn-metrics" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058101 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="nbdb" Feb 03 06:11:17 crc kubenswrapper[4872]: E0203 06:11:17.058208 
4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058218 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058324 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.058522 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerName="ovnkube-controller" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.060093 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.168932 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-systemd\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.168994 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs9hh\" (UniqueName: \"kubernetes.io/projected/dafd73bb-7642-409c-9ea2-f6dbc002067f-kube-api-access-bs9hh\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169028 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-node-log\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169056 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-ovn-kubernetes\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169082 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-netns\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169132 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-systemd-units\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169166 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-slash\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169210 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-script-lib\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169238 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-env-overrides\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169259 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-config\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169286 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-bin\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169306 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-log-socket\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169335 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169356 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-openvswitch\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169377 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovn-node-metrics-cert\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169397 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-netd\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169415 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-var-lib-openvswitch\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169437 4872 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-ovn\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169457 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-kubelet\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169474 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-etc-openvswitch\") pod \"dafd73bb-7642-409c-9ea2-f6dbc002067f\" (UID: \"dafd73bb-7642-409c-9ea2-f6dbc002067f\") " Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169604 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169642 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-etc-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169667 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovn-node-metrics-cert\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169714 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovnkube-config\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169744 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169768 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-slash\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169789 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-kubelet\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169810 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-systemd-units\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169831 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-run-netns\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169852 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-ovn\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169875 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-log-socket\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169893 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-env-overrides\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169931 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169952 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-systemd\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169972 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-var-lib-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169992 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qnb\" (UniqueName: \"kubernetes.io/projected/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-kube-api-access-s8qnb\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170015 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-cni-bin\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170040 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovnkube-script-lib\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-cni-netd\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170088 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-node-log\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.169159 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-node-log" (OuterVolumeSpecName: "node-log") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170739 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170761 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170778 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.170794 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-slash" (OuterVolumeSpecName: "host-slash") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.171359 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.171913 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.171934 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.171964 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-log-socket" (OuterVolumeSpecName: "log-socket") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.171930 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.171965 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.172020 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.172018 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.172046 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.172127 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.172212 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.172984 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.176219 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafd73bb-7642-409c-9ea2-f6dbc002067f-kube-api-access-bs9hh" (OuterVolumeSpecName: "kube-api-access-bs9hh") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "kube-api-access-bs9hh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.177461 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.184205 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dafd73bb-7642-409c-9ea2-f6dbc002067f" (UID: "dafd73bb-7642-409c-9ea2-f6dbc002067f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.271976 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-cni-bin\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272072 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovnkube-script-lib\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272101 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-cni-bin\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272190 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-cni-netd\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272234 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-cni-netd\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272233 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-node-log\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272292 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-node-log\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272316 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272353 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-etc-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272375 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovn-node-metrics-cert\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272393 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovnkube-config\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272424 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272444 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-kubelet\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272449 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-etc-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272458 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-slash\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272477 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-slash\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 
06:11:17.272507 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-systemd-units\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272540 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-run-netns\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272571 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-ovn\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272607 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-log-socket\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272638 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-env-overrides\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272728 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272763 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-systemd\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272775 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovnkube-script-lib\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272798 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-var-lib-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272830 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272838 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qnb\" (UniqueName: \"kubernetes.io/projected/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-kube-api-access-s8qnb\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272908 4872 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.272928 4872 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-slash\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.274480 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.274758 4872 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.274782 4872 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.274803 4872 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273120 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-log-socket\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.274821 4872 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-log-socket\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.274841 4872 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.275779 4872 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.275924 4872 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dafd73bb-7642-409c-9ea2-f6dbc002067f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.275980 4872 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276001 4872 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276022 4872 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276039 4872 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273292 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-var-lib-openvswitch\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276057 4872 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273190 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-run-netns\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276091 4872 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273170 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-systemd-units\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273521 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-env-overrides\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273233 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovnkube-config\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273083 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-kubelet\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273271 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-systemd\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276112 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs9hh\" (UniqueName: \"kubernetes.io/projected/dafd73bb-7642-409c-9ea2-f6dbc002067f-kube-api-access-bs9hh\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276230 4872 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-node-log\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276273 4872 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273211 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-run-ovn\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.276305 4872 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dafd73bb-7642-409c-9ea2-f6dbc002067f-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273145 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.273243 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.278344 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-ovn-node-metrics-cert\") pod \"ovnkube-node-dtncz\" (UID: 
\"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.288256 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qnb\" (UniqueName: \"kubernetes.io/projected/1eea1c47-892b-43a8-9775-9ed0ae3d23e9-kube-api-access-s8qnb\") pod \"ovnkube-node-dtncz\" (UID: \"1eea1c47-892b-43a8-9775-9ed0ae3d23e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.379974 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.763667 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/2.log" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.766222 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"13702236eb4e20369cc935aa8140d7d163141a3c12de2b22a2338f4848317be5"} Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.770405 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovnkube-controller/3.log" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.773851 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovn-acl-logging/0.log" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.774358 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jbpgt_dafd73bb-7642-409c-9ea2-f6dbc002067f/ovn-controller/0.log" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.774786 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" exitCode=0 Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.774879 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" exitCode=0 Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.774934 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" exitCode=0 Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.774988 4872 generic.go:334] "Generic (PLEG): container finished" podID="dafd73bb-7642-409c-9ea2-f6dbc002067f" containerID="fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" exitCode=0 Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.775060 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5"} Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.775148 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" 
event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea"} Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.775207 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff"} Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.775262 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2"} Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.775326 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" event={"ID":"dafd73bb-7642-409c-9ea2-f6dbc002067f","Type":"ContainerDied","Data":"29fa1815ceb949fd26f698e03328b8ccc01cecd9374a91eaf6daecb13e66f52e"} Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.775397 4872 scope.go:117] "RemoveContainer" containerID="6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.775701 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jbpgt" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.818293 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.822342 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbpgt"] Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.825748 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jbpgt"] Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.839698 4872 scope.go:117] "RemoveContainer" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.857213 4872 scope.go:117] "RemoveContainer" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.868486 4872 scope.go:117] "RemoveContainer" containerID="fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.895150 4872 scope.go:117] "RemoveContainer" containerID="355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.943644 4872 scope.go:117] "RemoveContainer" containerID="6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.957517 4872 scope.go:117] "RemoveContainer" containerID="effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.969656 4872 scope.go:117] "RemoveContainer" containerID="f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" Feb 03 06:11:17 crc kubenswrapper[4872]: I0203 06:11:17.983576 4872 scope.go:117] "RemoveContainer" containerID="8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.001997 4872 scope.go:117] "RemoveContainer" 
containerID="6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.003057 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": container with ID starting with 6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5 not found: ID does not exist" containerID="6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.003128 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5"} err="failed to get container status \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": rpc error: code = NotFound desc = could not find container \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": container with ID starting with 6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.003161 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.003644 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": container with ID starting with b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9 not found: ID does not exist" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.003726 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9"} err="failed to get container status \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": rpc error: code = NotFound desc = could not find container \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": container with ID starting with b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.003776 4872 scope.go:117] "RemoveContainer" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.004398 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": container with ID starting with e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea not found: ID does not exist" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.004428 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea"} err="failed to get container status \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": rpc error: code = NotFound desc = could not find container \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": container with ID starting with 
e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.004444 4872 scope.go:117] "RemoveContainer" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.004884 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": container with ID starting with 077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff not found: ID does not exist" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.004921 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff"} err="failed to get container status \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": rpc error: code = NotFound desc = could not find container \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": container with ID starting with 077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.004968 4872 scope.go:117] "RemoveContainer" containerID="fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.005429 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": container with ID starting with fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2 not found: ID does not exist" containerID="fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.005475 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2"} err="failed to get container status \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": rpc error: code = NotFound desc = could not find container \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": container with ID starting with fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.005493 4872 scope.go:117] "RemoveContainer" containerID="355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.006103 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": container with ID starting with 355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a not found: ID does not exist" containerID="355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.006165 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a"} err="failed to get container status \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": rpc 
error: code = NotFound desc = could not find container \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": container with ID starting with 355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.006187 4872 scope.go:117] "RemoveContainer" containerID="6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.006479 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": container with ID starting with 6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06 not found: ID does not exist" containerID="6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.006519 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06"} err="failed to get container status \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": rpc error: code = NotFound desc = could not find container \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": container with ID starting with 6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.006542 4872 scope.go:117] "RemoveContainer" containerID="effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.007100 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": container with ID starting with effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a not found: ID does not exist" containerID="effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.007132 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a"} err="failed to get container status \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": rpc error: code = NotFound desc = could not find container \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": container with ID starting with effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.007176 4872 scope.go:117] "RemoveContainer" containerID="f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.007467 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": container with ID starting with f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54 not found: ID does not exist" containerID="f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.007519 4872 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54"} err="failed to get container status \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": rpc error: code = NotFound desc = could not find container \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": container with ID starting with f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.007545 4872 scope.go:117] "RemoveContainer" containerID="8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda" Feb 03 06:11:18 crc kubenswrapper[4872]: E0203 06:11:18.008234 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": container with ID starting with 8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda not found: ID does not exist" containerID="8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.008266 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda"} err="failed to get container status \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": rpc error: code = NotFound desc = could not find container \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": container with ID starting with 8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.008289 4872 scope.go:117] "RemoveContainer" containerID="6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.008838 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5"} err="failed to get container status \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": rpc error: code = NotFound desc = could not find container \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": container with ID starting with 6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.008861 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.009860 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9"} err="failed to get container status \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": rpc error: code = NotFound desc = could not find container \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": container with ID starting with b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.009888 4872 scope.go:117] "RemoveContainer" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.010723 4872 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea"} err="failed to get container status \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": rpc error: code = NotFound desc = could not find container \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": container with ID starting with e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.010753 4872 scope.go:117] "RemoveContainer" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.011149 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff"} err="failed to get container status \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": rpc error: code = NotFound desc = could not find container \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": container with ID starting with 077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.011195 4872 scope.go:117] "RemoveContainer" containerID="fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.011737 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2"} err="failed to get container status \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": rpc error: code = NotFound desc = could not find container \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": container with ID starting with fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.011761 4872 scope.go:117] "RemoveContainer" containerID="355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.012209 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a"} err="failed to get container status \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": rpc error: code = NotFound desc = could not find container \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": container with ID starting with 355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.012255 4872 scope.go:117] "RemoveContainer" containerID="6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.012646 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06"} err="failed to get container status \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": rpc error: code = NotFound desc = could not find container \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": container with ID starting with 6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06 not found: ID does not exist" Feb 
03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.012676 4872 scope.go:117] "RemoveContainer" containerID="effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.013129 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a"} err="failed to get container status \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": rpc error: code = NotFound desc = could not find container \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": container with ID starting with effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.013154 4872 scope.go:117] "RemoveContainer" containerID="f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.013560 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54"} err="failed to get container status \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": rpc error: code = NotFound desc = could not find container \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": container with ID starting with f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.013589 4872 scope.go:117] "RemoveContainer" containerID="8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.014061 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda"} err="failed to get container status \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": rpc error: code = NotFound desc = could not find container \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": container with ID starting with 8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.014086 4872 scope.go:117] "RemoveContainer" containerID="6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.014343 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5"} err="failed to get container status \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": rpc error: code = NotFound desc = could not find container \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": container with ID starting with 6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.014368 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.014723 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9"} err="failed to get container status 
\"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": rpc error: code = NotFound desc = could not find container \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": container with ID starting with b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.014749 4872 scope.go:117] "RemoveContainer" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.015241 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea"} err="failed to get container status \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": rpc error: code = NotFound desc = could not find container \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": container with ID starting with e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.015321 4872 scope.go:117] "RemoveContainer" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.015778 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff"} err="failed to get container status \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": rpc error: code = NotFound desc = could not find container \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": container with ID starting with 077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.015801 4872 scope.go:117] "RemoveContainer" containerID="fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.016130 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2"} err="failed to get container status \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": rpc error: code = NotFound desc = could not find container \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": container with ID starting with fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.016153 4872 scope.go:117] "RemoveContainer" containerID="355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.016397 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a"} err="failed to get container status \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": rpc error: code = NotFound desc = could not find container \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": container with ID starting with 355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.016421 4872 scope.go:117] "RemoveContainer" 
containerID="6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.016740 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06"} err="failed to get container status \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": rpc error: code = NotFound desc = could not find container \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": container with ID starting with 6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.016766 4872 scope.go:117] "RemoveContainer" containerID="effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.017168 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a"} err="failed to get container status \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": rpc error: code = NotFound desc = could not find container \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": container with ID starting with effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.017195 4872 scope.go:117] "RemoveContainer" containerID="f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.017621 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54"} err="failed to get container status \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": rpc error: code = NotFound desc = could not find container \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": container with ID starting with f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.017644 4872 scope.go:117] "RemoveContainer" containerID="8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.018428 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda"} err="failed to get container status \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": rpc error: code = NotFound desc = could not find container \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": container with ID starting with 8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.018452 4872 scope.go:117] "RemoveContainer" containerID="6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.018758 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5"} err="failed to get container status \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": rpc error: code = NotFound desc = could not find 
container \"6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5\": container with ID starting with 6f54ff89001043bdef5d3333fd3d13e4a4cd4415f1ec82c552fee8f5c262f1e5 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.018780 4872 scope.go:117] "RemoveContainer" containerID="b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.019962 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9"} err="failed to get container status \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": rpc error: code = NotFound desc = could not find container \"b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9\": container with ID starting with b06d29914930ed3cc1c9fe9c3cff5d852f26f8966b8ea43cb2ef84cdd2783be9 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.019986 4872 scope.go:117] "RemoveContainer" containerID="e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.020369 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea"} err="failed to get container status \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": rpc error: code = NotFound desc = could not find container \"e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea\": container with ID starting with e2cf328935ad0a7df6dd20c736648a246d9026d8b915192ad4e6ec8502022dea not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.020400 4872 scope.go:117] "RemoveContainer" containerID="077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.021100 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff"} err="failed to get container status \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": rpc error: code = NotFound desc = could not find container \"077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff\": container with ID starting with 077d64b6fedd1be65e35b3673541ff954c540d7d257b6da4da38fbf11a6637ff not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.021165 4872 scope.go:117] "RemoveContainer" containerID="fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.021771 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2"} err="failed to get container status \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": rpc error: code = NotFound desc = could not find container \"fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2\": container with ID starting with fae705c6670771a541e1cb1cfce12c4fafb9ff10d2a79a5ed1e0b67f56ed2db2 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.021793 4872 scope.go:117] "RemoveContainer" containerID="355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.022323 4872 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a"} err="failed to get container status \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": rpc error: code = NotFound desc = could not find container \"355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a\": container with ID starting with 355cdb1cb468491643110f44ac0b73ac210074b45ac9649c20cd5319b91cc90a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.022381 4872 scope.go:117] "RemoveContainer" containerID="6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.022767 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06"} err="failed to get container status \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": rpc error: code = NotFound desc = could not find container \"6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06\": container with ID starting with 6ce1c153bfc91ec4b205d9ca54309060ac832ae46ba5749b8a986de8309d2d06 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.022796 4872 scope.go:117] "RemoveContainer" containerID="effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.023048 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a"} err="failed to get container status \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": rpc error: code = NotFound desc = could not find container \"effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a\": container with ID starting with effaf75a6485c91f830491401d92d8641f7fde31e3860243bd34969e1089325a not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.023088 4872 scope.go:117] "RemoveContainer" containerID="f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.023452 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54"} err="failed to get container status \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": rpc error: code = NotFound desc = could not find container \"f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54\": container with ID starting with f77f9c0e17e309422baaf1fd31fd19ef723527812f1b69bfdf2d6f1477488e54 not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.023496 4872 scope.go:117] "RemoveContainer" containerID="8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.023852 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda"} err="failed to get container status \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": rpc error: code = NotFound desc = could not find container \"8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda\": container with ID starting with 
8fffd3e115f36d4d166b600ff730d41155a75652ddefc0590ff22f4ddaff8dda not found: ID does not exist" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.130056 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dafd73bb-7642-409c-9ea2-f6dbc002067f" path="/var/lib/kubelet/pods/dafd73bb-7642-409c-9ea2-f6dbc002067f/volumes" Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.781940 4872 generic.go:334] "Generic (PLEG): container finished" podID="1eea1c47-892b-43a8-9775-9ed0ae3d23e9" containerID="a9194e11a996956de16eb2477b64b60d5643a9523bb9c14d314786561c31fad3" exitCode=0 Feb 03 06:11:18 crc kubenswrapper[4872]: I0203 06:11:18.782162 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerDied","Data":"a9194e11a996956de16eb2477b64b60d5643a9523bb9c14d314786561c31fad3"} Feb 03 06:11:19 crc kubenswrapper[4872]: I0203 06:11:19.810516 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"7ce6a0fbb725290f9b9ed14b07b3dd6db25876e33339572834804f09b823c683"} Feb 03 06:11:19 crc kubenswrapper[4872]: I0203 06:11:19.811109 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"1aeb1ad4c296ad9f95ee3fb16e604554ff6b42b087dcbbf772d052643746688b"} Feb 03 06:11:19 crc kubenswrapper[4872]: I0203 06:11:19.811124 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"c6bee31c85fcf3efa368ce3f26147cbd52583a8f35374ffe9903d404d595a45f"} Feb 03 06:11:19 crc kubenswrapper[4872]: I0203 06:11:19.811145 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"1373e19345a1de2d72175dfd54dd9877a6f78415ba9238e436fb81504b3d5121"} Feb 03 06:11:19 crc kubenswrapper[4872]: I0203 06:11:19.811160 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"062943a5d5a730b3bc6a7a20901ae76eaef564bb3e0a1dcf37047ce32230cf1e"} Feb 03 06:11:19 crc kubenswrapper[4872]: I0203 06:11:19.811177 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"fc2e10d1e92d49f5bd32d647dfcd3e17833e7de892fcdeea33704c33ce39b4b8"} Feb 03 06:11:22 crc kubenswrapper[4872]: I0203 06:11:22.845274 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"fdd7b00c73454877c5197b323d5ed4de14a1549ec60728f65dd5d2435c2a43f8"} Feb 03 06:11:24 crc kubenswrapper[4872]: I0203 06:11:24.863857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" event={"ID":"1eea1c47-892b-43a8-9775-9ed0ae3d23e9","Type":"ContainerStarted","Data":"7e3b10d83e097ce0c5e63af934c6a8cb26566a2b950406eb1d996e7405981ac0"} Feb 03 06:11:24 crc kubenswrapper[4872]: I0203 06:11:24.865784 4872 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:24 crc kubenswrapper[4872]: I0203 06:11:24.896363 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" podStartSLOduration=7.8963473650000005 podStartE2EDuration="7.896347365s" podCreationTimestamp="2026-02-03 06:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:11:24.89230156 +0000 UTC m=+655.474992984" watchObservedRunningTime="2026-02-03 06:11:24.896347365 +0000 UTC m=+655.479038789" Feb 03 06:11:24 crc kubenswrapper[4872]: I0203 06:11:24.906069 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:25 crc kubenswrapper[4872]: I0203 06:11:25.870429 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:25 crc kubenswrapper[4872]: I0203 06:11:25.870490 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:25 crc kubenswrapper[4872]: I0203 06:11:25.911741 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:28 crc kubenswrapper[4872]: I0203 06:11:28.122573 4872 scope.go:117] "RemoveContainer" containerID="1648ba316b90acc20185dae52c795ff2915e2ad6ae7077f5dacd0dc1bdbd67db" Feb 03 06:11:28 crc kubenswrapper[4872]: E0203 06:11:28.123251 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-g2f65_openshift-multus(db59aed5-04bc-4793-8938-196aace29feb)\"" pod="openshift-multus/multus-g2f65" podUID="db59aed5-04bc-4793-8938-196aace29feb" Feb 03 06:11:42 crc kubenswrapper[4872]: I0203 06:11:42.122993 4872 scope.go:117] "RemoveContainer" containerID="1648ba316b90acc20185dae52c795ff2915e2ad6ae7077f5dacd0dc1bdbd67db" Feb 03 06:11:42 crc kubenswrapper[4872]: I0203 06:11:42.985028 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2f65_db59aed5-04bc-4793-8938-196aace29feb/kube-multus/2.log" Feb 03 06:11:42 crc kubenswrapper[4872]: I0203 06:11:42.985116 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2f65" event={"ID":"db59aed5-04bc-4793-8938-196aace29feb","Type":"ContainerStarted","Data":"eac09d992e7786bc09cec34fbff3a277b125f2c7bcb4cc00e62e0ef9c0b3a82e"} Feb 03 06:11:47 crc kubenswrapper[4872]: I0203 06:11:47.422152 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtncz" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.256089 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s"] Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.257519 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.265593 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s"] Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.266401 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.368199 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.368276 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqpg\" (UniqueName: \"kubernetes.io/projected/f2a64c03-cefd-4a9e-9f86-fd50865536d3-kube-api-access-6bqpg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.368318 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.469100 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqpg\" (UniqueName: \"kubernetes.io/projected/f2a64c03-cefd-4a9e-9f86-fd50865536d3-kube-api-access-6bqpg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.469164 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.469219 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.469805 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.470178 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.506396 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqpg\" (UniqueName: \"kubernetes.io/projected/f2a64c03-cefd-4a9e-9f86-fd50865536d3-kube-api-access-6bqpg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.578114 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:11:55 crc kubenswrapper[4872]: I0203 06:11:55.854788 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s"] Feb 03 06:11:56 crc kubenswrapper[4872]: I0203 06:11:56.075184 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" event={"ID":"f2a64c03-cefd-4a9e-9f86-fd50865536d3","Type":"ContainerStarted","Data":"b3b291889149278f748beedae279b29b6e9a1d1f0191762af3c8cff39658acde"} Feb 03 06:11:56 crc kubenswrapper[4872]: I0203 06:11:56.075825 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" event={"ID":"f2a64c03-cefd-4a9e-9f86-fd50865536d3","Type":"ContainerStarted","Data":"1d63533ddd19744b64ec221c50fd7466d5b0c2e473786d78acaef6f59a0d0e2e"} Feb 03 06:11:57 crc kubenswrapper[4872]: I0203 06:11:57.083235 4872 generic.go:334] "Generic (PLEG): container finished" podID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerID="b3b291889149278f748beedae279b29b6e9a1d1f0191762af3c8cff39658acde" exitCode=0 Feb 03 06:11:57 crc kubenswrapper[4872]: I0203 06:11:57.083296 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" event={"ID":"f2a64c03-cefd-4a9e-9f86-fd50865536d3","Type":"ContainerDied","Data":"b3b291889149278f748beedae279b29b6e9a1d1f0191762af3c8cff39658acde"} Feb 03 06:11:59 crc kubenswrapper[4872]: I0203 06:11:59.097905 4872 generic.go:334] "Generic (PLEG): container finished" podID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerID="5880d07aac68df8e462e9ce31fcc9898bc51f35160b3c372fc8d5b43f95391d8" exitCode=0 Feb 03 06:11:59 crc kubenswrapper[4872]: I0203 06:11:59.098012 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" 
event={"ID":"f2a64c03-cefd-4a9e-9f86-fd50865536d3","Type":"ContainerDied","Data":"5880d07aac68df8e462e9ce31fcc9898bc51f35160b3c372fc8d5b43f95391d8"} Feb 03 06:12:00 crc kubenswrapper[4872]: I0203 06:12:00.108113 4872 generic.go:334] "Generic (PLEG): container finished" podID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerID="82b44fab137de1d007dbcfb493fc85569ac2f18b5bf8abb148a84b8157fa610f" exitCode=0 Feb 03 06:12:00 crc kubenswrapper[4872]: I0203 06:12:00.108172 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" event={"ID":"f2a64c03-cefd-4a9e-9f86-fd50865536d3","Type":"ContainerDied","Data":"82b44fab137de1d007dbcfb493fc85569ac2f18b5bf8abb148a84b8157fa610f"} Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.275356 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.275891 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.424726 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.547601 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-bundle\") pod \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.547722 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-util\") pod \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.547778 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bqpg\" (UniqueName: \"kubernetes.io/projected/f2a64c03-cefd-4a9e-9f86-fd50865536d3-kube-api-access-6bqpg\") pod \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\" (UID: \"f2a64c03-cefd-4a9e-9f86-fd50865536d3\") " Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.549447 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-bundle" (OuterVolumeSpecName: "bundle") pod "f2a64c03-cefd-4a9e-9f86-fd50865536d3" (UID: "f2a64c03-cefd-4a9e-9f86-fd50865536d3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.556639 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a64c03-cefd-4a9e-9f86-fd50865536d3-kube-api-access-6bqpg" (OuterVolumeSpecName: "kube-api-access-6bqpg") pod "f2a64c03-cefd-4a9e-9f86-fd50865536d3" (UID: "f2a64c03-cefd-4a9e-9f86-fd50865536d3"). InnerVolumeSpecName "kube-api-access-6bqpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.650101 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bqpg\" (UniqueName: \"kubernetes.io/projected/f2a64c03-cefd-4a9e-9f86-fd50865536d3-kube-api-access-6bqpg\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.650146 4872 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.829088 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-util" (OuterVolumeSpecName: "util") pod "f2a64c03-cefd-4a9e-9f86-fd50865536d3" (UID: "f2a64c03-cefd-4a9e-9f86-fd50865536d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:12:01 crc kubenswrapper[4872]: I0203 06:12:01.853885 4872 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2a64c03-cefd-4a9e-9f86-fd50865536d3-util\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:02 crc kubenswrapper[4872]: I0203 06:12:02.125579 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" Feb 03 06:12:02 crc kubenswrapper[4872]: I0203 06:12:02.136598 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s" event={"ID":"f2a64c03-cefd-4a9e-9f86-fd50865536d3","Type":"ContainerDied","Data":"1d63533ddd19744b64ec221c50fd7466d5b0c2e473786d78acaef6f59a0d0e2e"} Feb 03 06:12:02 crc kubenswrapper[4872]: I0203 06:12:02.136668 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d63533ddd19744b64ec221c50fd7466d5b0c2e473786d78acaef6f59a0d0e2e" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.908146 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jnk4k"] Feb 03 06:12:06 crc kubenswrapper[4872]: E0203 06:12:06.908559 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerName="util" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.908570 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerName="util" Feb 03 06:12:06 crc kubenswrapper[4872]: E0203 06:12:06.908581 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerName="pull" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.908588 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerName="pull" Feb 03 06:12:06 crc kubenswrapper[4872]: E0203 06:12:06.908604 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerName="extract" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.908610 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerName="extract" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.908724 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a64c03-cefd-4a9e-9f86-fd50865536d3" containerName="extract" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.909048 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.911270 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.913955 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.914379 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mg4kb" Feb 03 06:12:06 crc kubenswrapper[4872]: I0203 06:12:06.929412 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jnk4k"] Feb 03 06:12:07 crc kubenswrapper[4872]: I0203 06:12:07.024511 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8bj\" (UniqueName: \"kubernetes.io/projected/0a2cf0fd-a05d-4b50-a0fa-727e373679c2-kube-api-access-pz8bj\") pod \"nmstate-operator-646758c888-jnk4k\" (UID: \"0a2cf0fd-a05d-4b50-a0fa-727e373679c2\") " pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" Feb 03 06:12:07 crc kubenswrapper[4872]: I0203 06:12:07.125412 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8bj\" (UniqueName: \"kubernetes.io/projected/0a2cf0fd-a05d-4b50-a0fa-727e373679c2-kube-api-access-pz8bj\") pod \"nmstate-operator-646758c888-jnk4k\" (UID: \"0a2cf0fd-a05d-4b50-a0fa-727e373679c2\") " pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" Feb 03 06:12:07 crc kubenswrapper[4872]: I0203 06:12:07.151543 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8bj\" (UniqueName: \"kubernetes.io/projected/0a2cf0fd-a05d-4b50-a0fa-727e373679c2-kube-api-access-pz8bj\") pod \"nmstate-operator-646758c888-jnk4k\" (UID: \"0a2cf0fd-a05d-4b50-a0fa-727e373679c2\") " pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" Feb 03 06:12:07 crc kubenswrapper[4872]: I0203 06:12:07.224514 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" Feb 03 06:12:07 crc kubenswrapper[4872]: I0203 06:12:07.443132 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-jnk4k"] Feb 03 06:12:08 crc kubenswrapper[4872]: I0203 06:12:08.160483 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" event={"ID":"0a2cf0fd-a05d-4b50-a0fa-727e373679c2","Type":"ContainerStarted","Data":"230ba41120adc190f2d36c06a29d596615a463bdf5abea691ad661bd10711f96"} Feb 03 06:12:10 crc kubenswrapper[4872]: I0203 06:12:10.172077 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" event={"ID":"0a2cf0fd-a05d-4b50-a0fa-727e373679c2","Type":"ContainerStarted","Data":"f4e2869214cc4c3859f95ab7834048e6418c108961ca74281dff04e573db9061"} Feb 03 06:12:10 crc kubenswrapper[4872]: I0203 06:12:10.200947 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-jnk4k" podStartSLOduration=2.09860906 podStartE2EDuration="4.200908081s" podCreationTimestamp="2026-02-03 06:12:06 +0000 UTC" firstStartedPulling="2026-02-03 06:12:07.457778105 +0000 UTC m=+698.040469529" lastFinishedPulling="2026-02-03 06:12:09.560077096 +0000 UTC m=+700.142768550" observedRunningTime="2026-02-03 06:12:10.193755674 +0000 UTC m=+700.776447148" watchObservedRunningTime="2026-02-03 06:12:10.200908081 +0000 UTC m=+700.783599525" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.784914 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-22pzf"] Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.786953 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.789124 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xtj95" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.804545 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-22pzf"] Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.810270 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d"] Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.811477 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.818856 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.834325 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4rnzm"] Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.834974 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.841132 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d"] Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.849081 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-nmstate-lock\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.849123 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-dbus-socket\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.849139 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-ovs-socket\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.849173 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c228k\" (UniqueName: \"kubernetes.io/projected/79f465a2-4fb7-470a-83ba-ed5d98e5227b-kube-api-access-c228k\") pod \"nmstate-webhook-8474b5b9d8-l6d5d\" (UID: \"79f465a2-4fb7-470a-83ba-ed5d98e5227b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.849195 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsm7n\" (UniqueName: \"kubernetes.io/projected/d985e2b5-be7b-4e11-835f-0fbb14859743-kube-api-access-wsm7n\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.849211 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/79f465a2-4fb7-470a-83ba-ed5d98e5227b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-l6d5d\" (UID: \"79f465a2-4fb7-470a-83ba-ed5d98e5227b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.849297 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mgcf\" (UniqueName: \"kubernetes.io/projected/eecf7d0c-c77c-4bb5-9588-24b324a7848f-kube-api-access-7mgcf\") pod \"nmstate-metrics-54757c584b-22pzf\" (UID: \"eecf7d0c-c77c-4bb5-9588-24b324a7848f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950246 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c228k\" (UniqueName: \"kubernetes.io/projected/79f465a2-4fb7-470a-83ba-ed5d98e5227b-kube-api-access-c228k\") pod \"nmstate-webhook-8474b5b9d8-l6d5d\" (UID: \"79f465a2-4fb7-470a-83ba-ed5d98e5227b\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950553 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsm7n\" (UniqueName: \"kubernetes.io/projected/d985e2b5-be7b-4e11-835f-0fbb14859743-kube-api-access-wsm7n\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950575 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/79f465a2-4fb7-470a-83ba-ed5d98e5227b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-l6d5d\" (UID: \"79f465a2-4fb7-470a-83ba-ed5d98e5227b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950635 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mgcf\" (UniqueName: \"kubernetes.io/projected/eecf7d0c-c77c-4bb5-9588-24b324a7848f-kube-api-access-7mgcf\") pod \"nmstate-metrics-54757c584b-22pzf\" (UID: \"eecf7d0c-c77c-4bb5-9588-24b324a7848f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950702 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-nmstate-lock\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950730 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-dbus-socket\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950747 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-ovs-socket\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.950860 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-ovs-socket\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: E0203 06:12:15.951649 4872 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 03 06:12:15 crc kubenswrapper[4872]: E0203 06:12:15.951710 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79f465a2-4fb7-470a-83ba-ed5d98e5227b-tls-key-pair podName:79f465a2-4fb7-470a-83ba-ed5d98e5227b nodeName:}" failed. No retries permitted until 2026-02-03 06:12:16.45167581 +0000 UTC m=+707.034367224 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/79f465a2-4fb7-470a-83ba-ed5d98e5227b-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-l6d5d" (UID: "79f465a2-4fb7-470a-83ba-ed5d98e5227b") : secret "openshift-nmstate-webhook" not found Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.951987 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-nmstate-lock\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.952226 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d985e2b5-be7b-4e11-835f-0fbb14859743-dbus-socket\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.981348 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsm7n\" (UniqueName: \"kubernetes.io/projected/d985e2b5-be7b-4e11-835f-0fbb14859743-kube-api-access-wsm7n\") pod \"nmstate-handler-4rnzm\" (UID: \"d985e2b5-be7b-4e11-835f-0fbb14859743\") " pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.985020 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p"] Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.985617 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.997710 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c228k\" (UniqueName: \"kubernetes.io/projected/79f465a2-4fb7-470a-83ba-ed5d98e5227b-kube-api-access-c228k\") pod \"nmstate-webhook-8474b5b9d8-l6d5d\" (UID: \"79f465a2-4fb7-470a-83ba-ed5d98e5227b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.997809 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p"] Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.997914 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5zv4j" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.998312 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 03 06:12:15 crc kubenswrapper[4872]: I0203 06:12:15.998720 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.005237 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mgcf\" (UniqueName: \"kubernetes.io/projected/eecf7d0c-c77c-4bb5-9588-24b324a7848f-kube-api-access-7mgcf\") pod \"nmstate-metrics-54757c584b-22pzf\" (UID: \"eecf7d0c-c77c-4bb5-9588-24b324a7848f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.103434 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.145739 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.154380 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngql8\" (UniqueName: \"kubernetes.io/projected/724ba92c-602b-4031-8ad1-7e5b084c4386-kube-api-access-ngql8\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.154446 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/724ba92c-602b-4031-8ad1-7e5b084c4386-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.154565 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/724ba92c-602b-4031-8ad1-7e5b084c4386-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.206470 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-755c765b9d-qfpk5"] Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.218416 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.230110 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-755c765b9d-qfpk5"] Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.237148 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4rnzm" event={"ID":"d985e2b5-be7b-4e11-835f-0fbb14859743","Type":"ContainerStarted","Data":"5e4848db268c62725e17ce5a5878d70dce6ea41402468ad05e9a1c502e9bb7db"} Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.255255 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngql8\" (UniqueName: \"kubernetes.io/projected/724ba92c-602b-4031-8ad1-7e5b084c4386-kube-api-access-ngql8\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.255296 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/724ba92c-602b-4031-8ad1-7e5b084c4386-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.255356 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/724ba92c-602b-4031-8ad1-7e5b084c4386-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.256137 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/724ba92c-602b-4031-8ad1-7e5b084c4386-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: E0203 06:12:16.256389 4872 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 03 06:12:16 crc kubenswrapper[4872]: E0203 06:12:16.256433 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/724ba92c-602b-4031-8ad1-7e5b084c4386-plugin-serving-cert podName:724ba92c-602b-4031-8ad1-7e5b084c4386 nodeName:}" failed. No retries permitted until 2026-02-03 06:12:16.75642088 +0000 UTC m=+707.339112294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/724ba92c-602b-4031-8ad1-7e5b084c4386-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-pnq7p" (UID: "724ba92c-602b-4031-8ad1-7e5b084c4386") : secret "plugin-serving-cert" not found Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.283865 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngql8\" (UniqueName: \"kubernetes.io/projected/724ba92c-602b-4031-8ad1-7e5b084c4386-kube-api-access-ngql8\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.356874 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-oauth-serving-cert\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.356948 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mrck\" (UniqueName: \"kubernetes.io/projected/732936ba-1ca1-4350-ac27-dd039f59ee87-kube-api-access-9mrck\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.356970 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-console-config\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.356989 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-trusted-ca-bundle\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.357095 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-service-ca\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.357116 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/732936ba-1ca1-4350-ac27-dd039f59ee87-console-serving-cert\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.357138 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/732936ba-1ca1-4350-ac27-dd039f59ee87-console-oauth-config\") pod \"console-755c765b9d-qfpk5\" (UID: 
\"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.377628 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-22pzf"] Feb 03 06:12:16 crc kubenswrapper[4872]: W0203 06:12:16.384417 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeecf7d0c_c77c_4bb5_9588_24b324a7848f.slice/crio-9078791c801ecf636ce09f9fce5e09bfe00e1c0ec29b57ed807eb71f9225c1d3 WatchSource:0}: Error finding container 9078791c801ecf636ce09f9fce5e09bfe00e1c0ec29b57ed807eb71f9225c1d3: Status 404 returned error can't find the container with id 9078791c801ecf636ce09f9fce5e09bfe00e1c0ec29b57ed807eb71f9225c1d3 Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.458981 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/732936ba-1ca1-4350-ac27-dd039f59ee87-console-serving-cert\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.459071 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/732936ba-1ca1-4350-ac27-dd039f59ee87-console-oauth-config\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.459174 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-oauth-serving-cert\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.459212 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mrck\" (UniqueName: \"kubernetes.io/projected/732936ba-1ca1-4350-ac27-dd039f59ee87-kube-api-access-9mrck\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.459235 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-console-config\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.459258 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-trusted-ca-bundle\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.459299 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/79f465a2-4fb7-470a-83ba-ed5d98e5227b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-l6d5d\" (UID: \"79f465a2-4fb7-470a-83ba-ed5d98e5227b\") " 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.459342 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-service-ca\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.460738 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-oauth-serving-cert\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.460987 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-service-ca\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.461142 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-console-config\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.461959 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/732936ba-1ca1-4350-ac27-dd039f59ee87-trusted-ca-bundle\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.465885 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/79f465a2-4fb7-470a-83ba-ed5d98e5227b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-l6d5d\" (UID: \"79f465a2-4fb7-470a-83ba-ed5d98e5227b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.466484 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/732936ba-1ca1-4350-ac27-dd039f59ee87-console-serving-cert\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.467385 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/732936ba-1ca1-4350-ac27-dd039f59ee87-console-oauth-config\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.480206 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mrck\" (UniqueName: \"kubernetes.io/projected/732936ba-1ca1-4350-ac27-dd039f59ee87-kube-api-access-9mrck\") pod \"console-755c765b9d-qfpk5\" (UID: \"732936ba-1ca1-4350-ac27-dd039f59ee87\") " pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc 
kubenswrapper[4872]: I0203 06:12:16.548533 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.731313 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.763563 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/724ba92c-602b-4031-8ad1-7e5b084c4386-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.772498 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/724ba92c-602b-4031-8ad1-7e5b084c4386-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pnq7p\" (UID: \"724ba92c-602b-4031-8ad1-7e5b084c4386\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.845945 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-755c765b9d-qfpk5"] Feb 03 06:12:16 crc kubenswrapper[4872]: W0203 06:12:16.854193 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod732936ba_1ca1_4350_ac27_dd039f59ee87.slice/crio-beabb8ba654976f510feddbd0a8eca771ca76dd3e60dd65c73149f0f16c36ff3 WatchSource:0}: Error finding container beabb8ba654976f510feddbd0a8eca771ca76dd3e60dd65c73149f0f16c36ff3: Status 404 returned error can't find the container with id beabb8ba654976f510feddbd0a8eca771ca76dd3e60dd65c73149f0f16c36ff3 Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.948512 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" Feb 03 06:12:16 crc kubenswrapper[4872]: I0203 06:12:16.953599 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d"] Feb 03 06:12:17 crc kubenswrapper[4872]: I0203 06:12:17.168280 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p"] Feb 03 06:12:17 crc kubenswrapper[4872]: I0203 06:12:17.243358 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" event={"ID":"79f465a2-4fb7-470a-83ba-ed5d98e5227b","Type":"ContainerStarted","Data":"ac2578c33bd68fc52ebdd19f6abf847ddb59e403afc6bc2b73bf00fae56ded47"} Feb 03 06:12:17 crc kubenswrapper[4872]: I0203 06:12:17.252295 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-755c765b9d-qfpk5" event={"ID":"732936ba-1ca1-4350-ac27-dd039f59ee87","Type":"ContainerStarted","Data":"fc35a46541d317e7f9ed94a3b8bec1d6b0f6aa4c20e8564d0db72f0df96e36c7"} Feb 03 06:12:17 crc kubenswrapper[4872]: I0203 06:12:17.252339 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-755c765b9d-qfpk5" event={"ID":"732936ba-1ca1-4350-ac27-dd039f59ee87","Type":"ContainerStarted","Data":"beabb8ba654976f510feddbd0a8eca771ca76dd3e60dd65c73149f0f16c36ff3"} Feb 03 06:12:17 crc kubenswrapper[4872]: I0203 06:12:17.256561 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" event={"ID":"eecf7d0c-c77c-4bb5-9588-24b324a7848f","Type":"ContainerStarted","Data":"9078791c801ecf636ce09f9fce5e09bfe00e1c0ec29b57ed807eb71f9225c1d3"} Feb 03 06:12:17 crc kubenswrapper[4872]: I0203 06:12:17.257495 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" event={"ID":"724ba92c-602b-4031-8ad1-7e5b084c4386","Type":"ContainerStarted","Data":"52cb95f76207bfd5be3347bcc1856f6c1eab7506295d2e8d7ad7e537804f300c"} Feb 03 06:12:20 crc kubenswrapper[4872]: I0203 06:12:20.175955 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-755c765b9d-qfpk5" podStartSLOduration=4.175937481 podStartE2EDuration="4.175937481s" podCreationTimestamp="2026-02-03 06:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:12:17.272943869 +0000 UTC m=+707.855635283" watchObservedRunningTime="2026-02-03 06:12:20.175937481 +0000 UTC m=+710.758628905" Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.281993 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" event={"ID":"eecf7d0c-c77c-4bb5-9588-24b324a7848f","Type":"ContainerStarted","Data":"e911baf176233dc679c15430faeccd5bf45e627dab6fcce03cb97596a9259823"} Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.283597 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" event={"ID":"724ba92c-602b-4031-8ad1-7e5b084c4386","Type":"ContainerStarted","Data":"6c52843115625861986611a481ab5754bc8ef1fabb16714a813825e004562d2c"} Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.286606 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4rnzm" 
event={"ID":"d985e2b5-be7b-4e11-835f-0fbb14859743","Type":"ContainerStarted","Data":"c67d4022ff190e171188980032b28b0c43aba73bb2fe609189213ca08ddab023"} Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.286729 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.287746 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" event={"ID":"79f465a2-4fb7-470a-83ba-ed5d98e5227b","Type":"ContainerStarted","Data":"595dddcaff7a22dca8775ed20d51924d888cbeec444b055cfc9f9e7bc5507f32"} Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.288071 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.300613 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pnq7p" podStartSLOduration=3.387864583 podStartE2EDuration="6.300592873s" podCreationTimestamp="2026-02-03 06:12:15 +0000 UTC" firstStartedPulling="2026-02-03 06:12:17.17951654 +0000 UTC m=+707.762207954" lastFinishedPulling="2026-02-03 06:12:20.09224479 +0000 UTC m=+710.674936244" observedRunningTime="2026-02-03 06:12:21.297602223 +0000 UTC m=+711.880293677" watchObservedRunningTime="2026-02-03 06:12:21.300592873 +0000 UTC m=+711.883284297" Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.334726 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4rnzm" podStartSLOduration=2.3606247639999998 podStartE2EDuration="6.334707052s" podCreationTimestamp="2026-02-03 06:12:15 +0000 UTC" firstStartedPulling="2026-02-03 06:12:16.184041665 +0000 UTC m=+706.766733079" lastFinishedPulling="2026-02-03 06:12:20.158123943 +0000 UTC m=+710.740815367" observedRunningTime="2026-02-03 06:12:21.329096841 +0000 UTC m=+711.911788245" watchObservedRunningTime="2026-02-03 06:12:21.334707052 +0000 UTC m=+711.917398466" Feb 03 06:12:21 crc kubenswrapper[4872]: I0203 06:12:21.352902 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" podStartSLOduration=3.235168585 podStartE2EDuration="6.352877248s" podCreationTimestamp="2026-02-03 06:12:15 +0000 UTC" firstStartedPulling="2026-02-03 06:12:16.971795882 +0000 UTC m=+707.554487296" lastFinishedPulling="2026-02-03 06:12:20.089504515 +0000 UTC m=+710.672195959" observedRunningTime="2026-02-03 06:12:21.345901824 +0000 UTC m=+711.928593258" watchObservedRunningTime="2026-02-03 06:12:21.352877248 +0000 UTC m=+711.935568672" Feb 03 06:12:23 crc kubenswrapper[4872]: I0203 06:12:23.310383 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" event={"ID":"eecf7d0c-c77c-4bb5-9588-24b324a7848f","Type":"ContainerStarted","Data":"34b8ac32ce7b1c18a4e1a99258038f32000dd85d1829912f4e466af15a921388"} Feb 03 06:12:23 crc kubenswrapper[4872]: I0203 06:12:23.353171 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-22pzf" podStartSLOduration=2.221055125 podStartE2EDuration="8.353137378s" podCreationTimestamp="2026-02-03 06:12:15 +0000 UTC" firstStartedPulling="2026-02-03 06:12:16.387047522 +0000 UTC m=+706.969738936" lastFinishedPulling="2026-02-03 06:12:22.519129775 +0000 UTC m=+713.101821189" 
observedRunningTime="2026-02-03 06:12:23.344544445 +0000 UTC m=+713.927235899" watchObservedRunningTime="2026-02-03 06:12:23.353137378 +0000 UTC m=+713.935828842" Feb 03 06:12:26 crc kubenswrapper[4872]: I0203 06:12:26.167179 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4rnzm" Feb 03 06:12:26 crc kubenswrapper[4872]: I0203 06:12:26.548949 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:26 crc kubenswrapper[4872]: I0203 06:12:26.549044 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:26 crc kubenswrapper[4872]: I0203 06:12:26.558633 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:27 crc kubenswrapper[4872]: I0203 06:12:27.345656 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-755c765b9d-qfpk5" Feb 03 06:12:27 crc kubenswrapper[4872]: I0203 06:12:27.432332 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rlbdg"] Feb 03 06:12:31 crc kubenswrapper[4872]: I0203 06:12:31.271540 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:12:31 crc kubenswrapper[4872]: I0203 06:12:31.272550 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:12:36 crc kubenswrapper[4872]: I0203 06:12:36.742283 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-l6d5d" Feb 03 06:12:50 crc kubenswrapper[4872]: I0203 06:12:50.964465 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll"] Feb 03 06:12:50 crc kubenswrapper[4872]: I0203 06:12:50.966768 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:50 crc kubenswrapper[4872]: I0203 06:12:50.969633 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 06:12:50 crc kubenswrapper[4872]: I0203 06:12:50.981228 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll"] Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.076144 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.076383 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.076514 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzc7\" (UniqueName: \"kubernetes.io/projected/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-kube-api-access-dbzc7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.178415 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzc7\" (UniqueName: \"kubernetes.io/projected/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-kube-api-access-dbzc7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.178625 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.178665 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.179439 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.180072 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.204806 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzc7\" (UniqueName: \"kubernetes.io/projected/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-kube-api-access-dbzc7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.339256 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:51 crc kubenswrapper[4872]: I0203 06:12:51.812683 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll"] Feb 03 06:12:52 crc kubenswrapper[4872]: I0203 06:12:52.495399 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rlbdg" podUID="0180e076-5e8c-4190-bd67-569e2f915913" containerName="console" containerID="cri-o://fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb" gracePeriod=15 Feb 03 06:12:52 crc kubenswrapper[4872]: I0203 06:12:52.520067 4872 generic.go:334] "Generic (PLEG): container finished" podID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerID="2aeb8c42636c7ee6336f1b26f5c97fd43d458b72a55753ff537687e1151982dc" exitCode=0 Feb 03 06:12:52 crc kubenswrapper[4872]: I0203 06:12:52.520128 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" event={"ID":"9a5f0a26-8be9-4f0b-a612-5416c27be8d0","Type":"ContainerDied","Data":"2aeb8c42636c7ee6336f1b26f5c97fd43d458b72a55753ff537687e1151982dc"} Feb 03 06:12:52 crc kubenswrapper[4872]: I0203 06:12:52.520167 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" event={"ID":"9a5f0a26-8be9-4f0b-a612-5416c27be8d0","Type":"ContainerStarted","Data":"8891a2103216b69cbd2f9db82aabfb8b3608d713613d6d95b40e496d85193317"} Feb 03 06:12:52 crc kubenswrapper[4872]: I0203 06:12:52.900930 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rlbdg_0180e076-5e8c-4190-bd67-569e2f915913/console/0.log" Feb 03 06:12:52 crc kubenswrapper[4872]: I0203 06:12:52.901008 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.008449 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7697\" (UniqueName: \"kubernetes.io/projected/0180e076-5e8c-4190-bd67-569e2f915913-kube-api-access-z7697\") pod \"0180e076-5e8c-4190-bd67-569e2f915913\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.008528 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-console-config\") pod \"0180e076-5e8c-4190-bd67-569e2f915913\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.008663 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-serving-cert\") pod \"0180e076-5e8c-4190-bd67-569e2f915913\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.008727 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-trusted-ca-bundle\") pod \"0180e076-5e8c-4190-bd67-569e2f915913\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.008755 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-oauth-serving-cert\") pod \"0180e076-5e8c-4190-bd67-569e2f915913\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.008776 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-service-ca\") pod \"0180e076-5e8c-4190-bd67-569e2f915913\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.008819 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-oauth-config\") pod \"0180e076-5e8c-4190-bd67-569e2f915913\" (UID: \"0180e076-5e8c-4190-bd67-569e2f915913\") " Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.009890 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0180e076-5e8c-4190-bd67-569e2f915913" (UID: "0180e076-5e8c-4190-bd67-569e2f915913"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.009961 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-console-config" (OuterVolumeSpecName: "console-config") pod "0180e076-5e8c-4190-bd67-569e2f915913" (UID: "0180e076-5e8c-4190-bd67-569e2f915913"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.010642 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0180e076-5e8c-4190-bd67-569e2f915913" (UID: "0180e076-5e8c-4190-bd67-569e2f915913"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.010653 4872 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.010757 4872 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.011075 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-service-ca" (OuterVolumeSpecName: "service-ca") pod "0180e076-5e8c-4190-bd67-569e2f915913" (UID: "0180e076-5e8c-4190-bd67-569e2f915913"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.015773 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0180e076-5e8c-4190-bd67-569e2f915913" (UID: "0180e076-5e8c-4190-bd67-569e2f915913"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.016863 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0180e076-5e8c-4190-bd67-569e2f915913-kube-api-access-z7697" (OuterVolumeSpecName: "kube-api-access-z7697") pod "0180e076-5e8c-4190-bd67-569e2f915913" (UID: "0180e076-5e8c-4190-bd67-569e2f915913"). InnerVolumeSpecName "kube-api-access-z7697". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.017552 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0180e076-5e8c-4190-bd67-569e2f915913" (UID: "0180e076-5e8c-4190-bd67-569e2f915913"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.112729 4872 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.112780 4872 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.112802 4872 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0180e076-5e8c-4190-bd67-569e2f915913-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.112820 4872 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0180e076-5e8c-4190-bd67-569e2f915913-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.112839 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7697\" (UniqueName: \"kubernetes.io/projected/0180e076-5e8c-4190-bd67-569e2f915913-kube-api-access-z7697\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.526524 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rlbdg_0180e076-5e8c-4190-bd67-569e2f915913/console/0.log" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.526578 4872 generic.go:334] "Generic (PLEG): container finished" podID="0180e076-5e8c-4190-bd67-569e2f915913" containerID="fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb" exitCode=2 Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.526610 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rlbdg" event={"ID":"0180e076-5e8c-4190-bd67-569e2f915913","Type":"ContainerDied","Data":"fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb"} Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.526639 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rlbdg" event={"ID":"0180e076-5e8c-4190-bd67-569e2f915913","Type":"ContainerDied","Data":"b59f668d6b3e1ca3b5bf498ed33fac868ebd04e1d4770219c4bf2f33b5134be1"} Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.526660 4872 scope.go:117] "RemoveContainer" containerID="fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.526665 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rlbdg" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.543944 4872 scope.go:117] "RemoveContainer" containerID="fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb" Feb 03 06:12:53 crc kubenswrapper[4872]: E0203 06:12:53.544365 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb\": container with ID starting with fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb not found: ID does not exist" containerID="fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.544390 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb"} err="failed to get container status \"fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb\": rpc error: code = NotFound desc = could not find container \"fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb\": container with ID starting with fe12c3a19a90ed58785f83d496c13cf7de1aa122fbf64c3234c2b27526a24ccb not found: ID does not exist" Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.571613 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rlbdg"] Feb 03 06:12:53 crc kubenswrapper[4872]: I0203 06:12:53.576603 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rlbdg"] Feb 03 06:12:54 crc kubenswrapper[4872]: I0203 06:12:54.134289 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0180e076-5e8c-4190-bd67-569e2f915913" path="/var/lib/kubelet/pods/0180e076-5e8c-4190-bd67-569e2f915913/volumes" Feb 03 06:12:54 crc kubenswrapper[4872]: I0203 06:12:54.538957 4872 generic.go:334] "Generic (PLEG): container finished" podID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerID="d1ba15cafaf770eff14fe72582eb0dae34edf1ff02e116e533e36cf1a6433766" exitCode=0 Feb 03 06:12:54 crc kubenswrapper[4872]: I0203 06:12:54.539150 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" event={"ID":"9a5f0a26-8be9-4f0b-a612-5416c27be8d0","Type":"ContainerDied","Data":"d1ba15cafaf770eff14fe72582eb0dae34edf1ff02e116e533e36cf1a6433766"} Feb 03 06:12:55 crc kubenswrapper[4872]: I0203 06:12:55.555754 4872 generic.go:334] "Generic (PLEG): container finished" podID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerID="380ed5471691f1d1f2a3b5d61344649deb4ab47ac4b3476dab46dede7c9a870d" exitCode=0 Feb 03 06:12:55 crc kubenswrapper[4872]: I0203 06:12:55.555824 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" event={"ID":"9a5f0a26-8be9-4f0b-a612-5416c27be8d0","Type":"ContainerDied","Data":"380ed5471691f1d1f2a3b5d61344649deb4ab47ac4b3476dab46dede7c9a870d"} Feb 03 06:12:56 crc kubenswrapper[4872]: I0203 06:12:56.868420 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:56 crc kubenswrapper[4872]: I0203 06:12:56.967841 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-util\") pod \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " Feb 03 06:12:56 crc kubenswrapper[4872]: I0203 06:12:56.967970 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-bundle\") pod \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " Feb 03 06:12:56 crc kubenswrapper[4872]: I0203 06:12:56.968028 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbzc7\" (UniqueName: \"kubernetes.io/projected/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-kube-api-access-dbzc7\") pod \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\" (UID: \"9a5f0a26-8be9-4f0b-a612-5416c27be8d0\") " Feb 03 06:12:56 crc kubenswrapper[4872]: I0203 06:12:56.971117 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-bundle" (OuterVolumeSpecName: "bundle") pod "9a5f0a26-8be9-4f0b-a612-5416c27be8d0" (UID: "9a5f0a26-8be9-4f0b-a612-5416c27be8d0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:12:56 crc kubenswrapper[4872]: I0203 06:12:56.976843 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-kube-api-access-dbzc7" (OuterVolumeSpecName: "kube-api-access-dbzc7") pod "9a5f0a26-8be9-4f0b-a612-5416c27be8d0" (UID: "9a5f0a26-8be9-4f0b-a612-5416c27be8d0"). InnerVolumeSpecName "kube-api-access-dbzc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:12:56 crc kubenswrapper[4872]: I0203 06:12:56.988477 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-util" (OuterVolumeSpecName: "util") pod "9a5f0a26-8be9-4f0b-a612-5416c27be8d0" (UID: "9a5f0a26-8be9-4f0b-a612-5416c27be8d0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:12:57 crc kubenswrapper[4872]: I0203 06:12:57.069725 4872 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:57 crc kubenswrapper[4872]: I0203 06:12:57.069959 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbzc7\" (UniqueName: \"kubernetes.io/projected/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-kube-api-access-dbzc7\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:57 crc kubenswrapper[4872]: I0203 06:12:57.070065 4872 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a5f0a26-8be9-4f0b-a612-5416c27be8d0-util\") on node \"crc\" DevicePath \"\"" Feb 03 06:12:57 crc kubenswrapper[4872]: I0203 06:12:57.572762 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" event={"ID":"9a5f0a26-8be9-4f0b-a612-5416c27be8d0","Type":"ContainerDied","Data":"8891a2103216b69cbd2f9db82aabfb8b3608d713613d6d95b40e496d85193317"} Feb 03 06:12:57 crc kubenswrapper[4872]: I0203 06:12:57.573031 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8891a2103216b69cbd2f9db82aabfb8b3608d713613d6d95b40e496d85193317" Feb 03 06:12:57 crc kubenswrapper[4872]: I0203 06:12:57.572903 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll" Feb 03 06:12:59 crc kubenswrapper[4872]: E0203 06:12:59.862474 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.271369 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.271442 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.271488 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.272028 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e60bd766edb6c61ff425105b834c7b74cd5da1d6ff6ffe6f2e66cc4bbe2ff323"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.272085 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" 
podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://e60bd766edb6c61ff425105b834c7b74cd5da1d6ff6ffe6f2e66cc4bbe2ff323" gracePeriod=600 Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.604336 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="e60bd766edb6c61ff425105b834c7b74cd5da1d6ff6ffe6f2e66cc4bbe2ff323" exitCode=0 Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.604413 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"e60bd766edb6c61ff425105b834c7b74cd5da1d6ff6ffe6f2e66cc4bbe2ff323"} Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.604645 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"8daff91be8f2ae554bbad9124562735aa9d055c764b0bc522db54af803ed7992"} Feb 03 06:13:01 crc kubenswrapper[4872]: I0203 06:13:01.604666 4872 scope.go:117] "RemoveContainer" containerID="56251d9aee6a397b40abf3eed474b8f789488d08ca0bb0ba1783f4b2052f93f8" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.958905 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz"] Feb 03 06:13:05 crc kubenswrapper[4872]: E0203 06:13:05.959647 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerName="extract" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.959657 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerName="extract" Feb 03 06:13:05 crc kubenswrapper[4872]: E0203 06:13:05.959669 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerName="util" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.959674 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerName="util" Feb 03 06:13:05 crc kubenswrapper[4872]: E0203 06:13:05.959705 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0180e076-5e8c-4190-bd67-569e2f915913" containerName="console" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.959712 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0180e076-5e8c-4190-bd67-569e2f915913" containerName="console" Feb 03 06:13:05 crc kubenswrapper[4872]: E0203 06:13:05.959719 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerName="pull" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.959724 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerName="pull" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.959844 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5f0a26-8be9-4f0b-a612-5416c27be8d0" containerName="extract" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.959855 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0180e076-5e8c-4190-bd67-569e2f915913" containerName="console" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.960196 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.963517 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.963828 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.964654 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cvnqd" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.964750 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.966373 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 03 06:13:05 crc kubenswrapper[4872]: I0203 06:13:05.973767 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz"] Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.089434 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80b94f6b-5ca4-4650-b58f-df22137e4c04-webhook-cert\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.089527 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80b94f6b-5ca4-4650-b58f-df22137e4c04-apiservice-cert\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.089549 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2vp\" (UniqueName: \"kubernetes.io/projected/80b94f6b-5ca4-4650-b58f-df22137e4c04-kube-api-access-xj2vp\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.191123 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2vp\" (UniqueName: \"kubernetes.io/projected/80b94f6b-5ca4-4650-b58f-df22137e4c04-kube-api-access-xj2vp\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.191162 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80b94f6b-5ca4-4650-b58f-df22137e4c04-apiservice-cert\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.191209 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80b94f6b-5ca4-4650-b58f-df22137e4c04-webhook-cert\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.197416 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80b94f6b-5ca4-4650-b58f-df22137e4c04-webhook-cert\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.197422 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80b94f6b-5ca4-4650-b58f-df22137e4c04-apiservice-cert\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.214351 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2vp\" (UniqueName: \"kubernetes.io/projected/80b94f6b-5ca4-4650-b58f-df22137e4c04-kube-api-access-xj2vp\") pod \"metallb-operator-controller-manager-fdc65c4dc-rczwz\" (UID: \"80b94f6b-5ca4-4650-b58f-df22137e4c04\") " pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.271561 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89"] Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.272181 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.272649 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.280118 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.288136 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.288270 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dzvxl" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.360740 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89"] Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.402229 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-webhook-cert\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.402274 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-apiservice-cert\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.402356 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmbs\" (UniqueName: \"kubernetes.io/projected/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-kube-api-access-gnmbs\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.503448 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnmbs\" (UniqueName: \"kubernetes.io/projected/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-kube-api-access-gnmbs\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.503807 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-webhook-cert\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.503950 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-apiservice-cert\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 
06:13:06.515511 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-webhook-cert\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.525571 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-apiservice-cert\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.525719 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnmbs\" (UniqueName: \"kubernetes.io/projected/c81e20ac-e8e8-4f44-ba9d-f52c5c30849b-kube-api-access-gnmbs\") pod \"metallb-operator-webhook-server-74df9ff78b-5dz89\" (UID: \"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b\") " pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.623430 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.650842 4872 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 06:13:06 crc kubenswrapper[4872]: I0203 06:13:06.834795 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz"] Feb 03 06:13:06 crc kubenswrapper[4872]: W0203 06:13:06.843781 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b94f6b_5ca4_4650_b58f_df22137e4c04.slice/crio-9de9e45f1bbde109b5e57c79ece161f56f43398e03729624e6c811820f472360 WatchSource:0}: Error finding container 9de9e45f1bbde109b5e57c79ece161f56f43398e03729624e6c811820f472360: Status 404 returned error can't find the container with id 9de9e45f1bbde109b5e57c79ece161f56f43398e03729624e6c811820f472360 Feb 03 06:13:07 crc kubenswrapper[4872]: I0203 06:13:07.109177 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89"] Feb 03 06:13:07 crc kubenswrapper[4872]: I0203 06:13:07.648020 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" event={"ID":"80b94f6b-5ca4-4650-b58f-df22137e4c04","Type":"ContainerStarted","Data":"9de9e45f1bbde109b5e57c79ece161f56f43398e03729624e6c811820f472360"} Feb 03 06:13:07 crc kubenswrapper[4872]: I0203 06:13:07.649173 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" event={"ID":"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b","Type":"ContainerStarted","Data":"baf0b4e67a6f73c706a7c47b197b4160c63c6bfb78a8c182ff3abe41f69270a1"} Feb 03 06:13:11 crc kubenswrapper[4872]: I0203 06:13:11.675753 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" 
event={"ID":"80b94f6b-5ca4-4650-b58f-df22137e4c04","Type":"ContainerStarted","Data":"e487a04b7c9ef9f3933fd6eaebc350cfcc15cfc60ed0cd24ad55604b52ca30b0"} Feb 03 06:13:11 crc kubenswrapper[4872]: I0203 06:13:11.676271 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:11 crc kubenswrapper[4872]: I0203 06:13:11.705439 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" podStartSLOduration=3.552809432 podStartE2EDuration="6.705423117s" podCreationTimestamp="2026-02-03 06:13:05 +0000 UTC" firstStartedPulling="2026-02-03 06:13:06.85011181 +0000 UTC m=+757.432803224" lastFinishedPulling="2026-02-03 06:13:10.002725495 +0000 UTC m=+760.585416909" observedRunningTime="2026-02-03 06:13:11.703296956 +0000 UTC m=+762.285988400" watchObservedRunningTime="2026-02-03 06:13:11.705423117 +0000 UTC m=+762.288114531" Feb 03 06:13:12 crc kubenswrapper[4872]: I0203 06:13:12.684805 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" event={"ID":"c81e20ac-e8e8-4f44-ba9d-f52c5c30849b","Type":"ContainerStarted","Data":"ef4be3e3fdfd491e2458e4376f30bc987a132ebd79893956923616500d00a99b"} Feb 03 06:13:12 crc kubenswrapper[4872]: I0203 06:13:12.685095 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:12 crc kubenswrapper[4872]: I0203 06:13:12.712272 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" podStartSLOduration=2.277174443 podStartE2EDuration="6.712250749s" podCreationTimestamp="2026-02-03 06:13:06 +0000 UTC" firstStartedPulling="2026-02-03 06:13:07.145674768 +0000 UTC m=+757.728366182" lastFinishedPulling="2026-02-03 06:13:11.580751074 +0000 UTC m=+762.163442488" observedRunningTime="2026-02-03 06:13:12.705814886 +0000 UTC m=+763.288506310" watchObservedRunningTime="2026-02-03 06:13:12.712250749 +0000 UTC m=+763.294942183" Feb 03 06:13:26 crc kubenswrapper[4872]: I0203 06:13:26.628489 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-74df9ff78b-5dz89" Feb 03 06:13:46 crc kubenswrapper[4872]: I0203 06:13:46.275070 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-fdc65c4dc-rczwz" Feb 03 06:13:46 crc kubenswrapper[4872]: I0203 06:13:46.999386 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qrbzc"] Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.002241 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.003949 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-9vlvn" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.022852 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.027097 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.040027 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7"] Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.040832 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.053876 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.070926 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7"] Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095108 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-startup\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095174 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69417317-0a8d-4c10-8f4c-fe8e387b678e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-grdt7\" (UID: \"69417317-0a8d-4c10-8f4c-fe8e387b678e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095193 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pw5n\" (UniqueName: \"kubernetes.io/projected/69417317-0a8d-4c10-8f4c-fe8e387b678e-kube-api-access-5pw5n\") pod \"frr-k8s-webhook-server-7df86c4f6c-grdt7\" (UID: \"69417317-0a8d-4c10-8f4c-fe8e387b678e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095218 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-sockets\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095237 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics-certs\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095257 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-reloader\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095276 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095295 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-conf\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.095308 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhcz\" (UniqueName: \"kubernetes.io/projected/7856ee3e-22ea-4e77-b4aa-69893fc7e281-kube-api-access-tlhcz\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.155289 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4qtc9"] Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.156131 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.160808 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.160851 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.161062 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-8cl4m" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.161304 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.193025 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-tb2wg"] Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.193829 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.198339 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.198968 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-startup\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.198997 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flv9c\" (UniqueName: \"kubernetes.io/projected/fb6f2971-eff5-4e61-8584-073de69e2e5f-kube-api-access-flv9c\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199024 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-metrics-certs\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199044 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h2qw\" (UniqueName: \"kubernetes.io/projected/21971f1c-c210-4df4-942c-4637ecdbcd75-kube-api-access-6h2qw\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199090 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-cert\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199110 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69417317-0a8d-4c10-8f4c-fe8e387b678e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-grdt7\" (UID: \"69417317-0a8d-4c10-8f4c-fe8e387b678e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199125 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pw5n\" (UniqueName: \"kubernetes.io/projected/69417317-0a8d-4c10-8f4c-fe8e387b678e-kube-api-access-5pw5n\") pod \"frr-k8s-webhook-server-7df86c4f6c-grdt7\" (UID: \"69417317-0a8d-4c10-8f4c-fe8e387b678e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199141 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-sockets\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199159 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics-certs\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199181 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-reloader\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199198 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-metrics-certs\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199216 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199238 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlhcz\" (UniqueName: \"kubernetes.io/projected/7856ee3e-22ea-4e77-b4aa-69893fc7e281-kube-api-access-tlhcz\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199253 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-conf\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199270 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21971f1c-c210-4df4-942c-4637ecdbcd75-metallb-excludel2\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199298 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.199771 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-sockets\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.200153 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-startup\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.200227 4872 
secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.200267 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics-certs podName:7856ee3e-22ea-4e77-b4aa-69893fc7e281 nodeName:}" failed. No retries permitted until 2026-02-03 06:13:47.700251578 +0000 UTC m=+798.282942992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics-certs") pod "frr-k8s-qrbzc" (UID: "7856ee3e-22ea-4e77-b4aa-69893fc7e281") : secret "frr-k8s-certs-secret" not found Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.200629 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-reloader\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.200835 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.201145 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7856ee3e-22ea-4e77-b4aa-69893fc7e281-frr-conf\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.209860 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/69417317-0a8d-4c10-8f4c-fe8e387b678e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-grdt7\" (UID: \"69417317-0a8d-4c10-8f4c-fe8e387b678e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.216854 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-tb2wg"] Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.245328 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pw5n\" (UniqueName: \"kubernetes.io/projected/69417317-0a8d-4c10-8f4c-fe8e387b678e-kube-api-access-5pw5n\") pod \"frr-k8s-webhook-server-7df86c4f6c-grdt7\" (UID: \"69417317-0a8d-4c10-8f4c-fe8e387b678e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.265557 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlhcz\" (UniqueName: \"kubernetes.io/projected/7856ee3e-22ea-4e77-b4aa-69893fc7e281-kube-api-access-tlhcz\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.300052 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-metrics-certs\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 
06:13:47.300166 4872 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.300219 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-metrics-certs podName:fb6f2971-eff5-4e61-8584-073de69e2e5f nodeName:}" failed. No retries permitted until 2026-02-03 06:13:47.800202348 +0000 UTC m=+798.382893762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-metrics-certs") pod "controller-6968d8fdc4-tb2wg" (UID: "fb6f2971-eff5-4e61-8584-073de69e2e5f") : secret "controller-certs-secret" not found Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.300424 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h2qw\" (UniqueName: \"kubernetes.io/projected/21971f1c-c210-4df4-942c-4637ecdbcd75-kube-api-access-6h2qw\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.300458 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-cert\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.300911 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-metrics-certs\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.300965 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21971f1c-c210-4df4-942c-4637ecdbcd75-metallb-excludel2\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.301014 4872 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.301084 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-metrics-certs podName:21971f1c-c210-4df4-942c-4637ecdbcd75 nodeName:}" failed. No retries permitted until 2026-02-03 06:13:47.801067889 +0000 UTC m=+798.383759303 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-metrics-certs") pod "speaker-4qtc9" (UID: "21971f1c-c210-4df4-942c-4637ecdbcd75") : secret "speaker-certs-secret" not found Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.301157 4872 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.301259 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist podName:21971f1c-c210-4df4-942c-4637ecdbcd75 nodeName:}" failed. 
No retries permitted until 2026-02-03 06:13:47.801238013 +0000 UTC m=+798.383929487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist") pod "speaker-4qtc9" (UID: "21971f1c-c210-4df4-942c-4637ecdbcd75") : secret "metallb-memberlist" not found Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.301549 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/21971f1c-c210-4df4-942c-4637ecdbcd75-metallb-excludel2\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.301001 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.301618 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flv9c\" (UniqueName: \"kubernetes.io/projected/fb6f2971-eff5-4e61-8584-073de69e2e5f-kube-api-access-flv9c\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.310521 4872 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.316019 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-cert\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.328248 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flv9c\" (UniqueName: \"kubernetes.io/projected/fb6f2971-eff5-4e61-8584-073de69e2e5f-kube-api-access-flv9c\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.345454 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h2qw\" (UniqueName: \"kubernetes.io/projected/21971f1c-c210-4df4-942c-4637ecdbcd75-kube-api-access-6h2qw\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.355547 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.705671 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics-certs\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.710612 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7856ee3e-22ea-4e77-b4aa-69893fc7e281-metrics-certs\") pod \"frr-k8s-qrbzc\" (UID: \"7856ee3e-22ea-4e77-b4aa-69893fc7e281\") " pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.807176 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.807229 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-metrics-certs\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.807288 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-metrics-certs\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9" Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.807368 4872 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 06:13:47 crc kubenswrapper[4872]: E0203 06:13:47.807432 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist podName:21971f1c-c210-4df4-942c-4637ecdbcd75 nodeName:}" failed. No retries permitted until 2026-02-03 06:13:48.807412813 +0000 UTC m=+799.390104227 (durationBeforeRetry 1s). 
Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.811799 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb6f2971-eff5-4e61-8584-073de69e2e5f-metrics-certs\") pod \"controller-6968d8fdc4-tb2wg\" (UID: \"fb6f2971-eff5-4e61-8584-073de69e2e5f\") " pod="metallb-system/controller-6968d8fdc4-tb2wg"
Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.811936 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-metrics-certs\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9"
Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.832898 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7"]
Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.838714 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-tb2wg"
Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.912241 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" event={"ID":"69417317-0a8d-4c10-8f4c-fe8e387b678e","Type":"ContainerStarted","Data":"ddda62ae7811c7db53865bbedecc471e40bd43e14d5ada075423fe39e9bbea6c"}
Feb 03 06:13:47 crc kubenswrapper[4872]: I0203 06:13:47.920193 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qrbzc"
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.292511 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-tb2wg"]
Feb 03 06:13:48 crc kubenswrapper[4872]: W0203 06:13:48.298067 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6f2971_eff5_4e61_8584_073de69e2e5f.slice/crio-4aed9dc5483f37bb41dfc1d3f06d68583566541314f55656959b349d10e7c3d5 WatchSource:0}: Error finding container 4aed9dc5483f37bb41dfc1d3f06d68583566541314f55656959b349d10e7c3d5: Status 404 returned error can't find the container with id 4aed9dc5483f37bb41dfc1d3f06d68583566541314f55656959b349d10e7c3d5
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.822630 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9"
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.828433 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/21971f1c-c210-4df4-942c-4637ecdbcd75-memberlist\") pod \"speaker-4qtc9\" (UID: \"21971f1c-c210-4df4-942c-4637ecdbcd75\") " pod="metallb-system/speaker-4qtc9"
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.919132 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tb2wg" event={"ID":"fb6f2971-eff5-4e61-8584-073de69e2e5f","Type":"ContainerStarted","Data":"0345a8edda1dc9367f2e6b06f15a6b4806adfe3b494ecb2aaf27bf916f89b471"}
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.919174 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tb2wg" event={"ID":"fb6f2971-eff5-4e61-8584-073de69e2e5f","Type":"ContainerStarted","Data":"a966a1aef415a8b6ea7486120d36afd420062f23ffcde3f7f4c6b19a9cac03f7"}
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.919184 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-tb2wg" event={"ID":"fb6f2971-eff5-4e61-8584-073de69e2e5f","Type":"ContainerStarted","Data":"4aed9dc5483f37bb41dfc1d3f06d68583566541314f55656959b349d10e7c3d5"}
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.919587 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-tb2wg"
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.922156 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerStarted","Data":"a6af8689cb64daa1807c8d689dd8b823551062f72a4e8eb49c52ed0968d8e53c"}
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.934853 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-tb2wg" podStartSLOduration=1.9348298449999999 podStartE2EDuration="1.934829845s" podCreationTimestamp="2026-02-03 06:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:13:48.932108791 +0000 UTC m=+799.514800225" watchObservedRunningTime="2026-02-03 06:13:48.934829845 +0000 UTC m=+799.517521299"
Feb 03 06:13:48 crc kubenswrapper[4872]: I0203 06:13:48.973304 4872
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4qtc9" Feb 03 06:13:49 crc kubenswrapper[4872]: W0203 06:13:49.008220 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21971f1c_c210_4df4_942c_4637ecdbcd75.slice/crio-a68b2503d3bb3ac843a11a1677fe127b4e74297860e7ec09a0cec049d6e6d124 WatchSource:0}: Error finding container a68b2503d3bb3ac843a11a1677fe127b4e74297860e7ec09a0cec049d6e6d124: Status 404 returned error can't find the container with id a68b2503d3bb3ac843a11a1677fe127b4e74297860e7ec09a0cec049d6e6d124 Feb 03 06:13:49 crc kubenswrapper[4872]: I0203 06:13:49.936168 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4qtc9" event={"ID":"21971f1c-c210-4df4-942c-4637ecdbcd75","Type":"ContainerStarted","Data":"d26bf229f76849e289ebb3a4fd22f69734b437645de2fc97baf92bdb830862db"} Feb 03 06:13:49 crc kubenswrapper[4872]: I0203 06:13:49.936229 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4qtc9" event={"ID":"21971f1c-c210-4df4-942c-4637ecdbcd75","Type":"ContainerStarted","Data":"00c2d9dfbbfaf23f06f57c3ab06f5283d21bae8ec9be0517139f849df2b0c061"} Feb 03 06:13:49 crc kubenswrapper[4872]: I0203 06:13:49.936243 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4qtc9" event={"ID":"21971f1c-c210-4df4-942c-4637ecdbcd75","Type":"ContainerStarted","Data":"a68b2503d3bb3ac843a11a1677fe127b4e74297860e7ec09a0cec049d6e6d124"} Feb 03 06:13:49 crc kubenswrapper[4872]: I0203 06:13:49.936873 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4qtc9" Feb 03 06:13:49 crc kubenswrapper[4872]: I0203 06:13:49.954779 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4qtc9" podStartSLOduration=2.954763958 podStartE2EDuration="2.954763958s" podCreationTimestamp="2026-02-03 06:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:13:49.951961101 +0000 UTC m=+800.534652525" watchObservedRunningTime="2026-02-03 06:13:49.954763958 +0000 UTC m=+800.537455372" Feb 03 06:13:55 crc kubenswrapper[4872]: I0203 06:13:55.996963 4872 generic.go:334] "Generic (PLEG): container finished" podID="7856ee3e-22ea-4e77-b4aa-69893fc7e281" containerID="85bad6bcc845f55adc45888c9cb77ba2cf187d473c807c4dd71f864147c5e1f6" exitCode=0 Feb 03 06:13:55 crc kubenswrapper[4872]: I0203 06:13:55.997023 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerDied","Data":"85bad6bcc845f55adc45888c9cb77ba2cf187d473c807c4dd71f864147c5e1f6"} Feb 03 06:13:55 crc kubenswrapper[4872]: I0203 06:13:55.999780 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" event={"ID":"69417317-0a8d-4c10-8f4c-fe8e387b678e","Type":"ContainerStarted","Data":"32bad270e607d0722ab415d2961365ffd608bc732fd0b0da9078df46b9fff9b6"} Feb 03 06:13:56 crc kubenswrapper[4872]: I0203 06:13:55.999916 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:13:56 crc kubenswrapper[4872]: I0203 06:13:56.051575 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" podStartSLOduration=2.216007841 podStartE2EDuration="10.051560149s" podCreationTimestamp="2026-02-03 06:13:46 +0000 UTC" firstStartedPulling="2026-02-03 06:13:47.845554115 +0000 UTC m=+798.428245529" lastFinishedPulling="2026-02-03 06:13:55.681106403 +0000 UTC m=+806.263797837" observedRunningTime="2026-02-03 06:13:56.051120228 +0000 UTC m=+806.633811652" watchObservedRunningTime="2026-02-03 06:13:56.051560149 +0000 UTC m=+806.634251563" Feb 03 06:13:57 crc kubenswrapper[4872]: I0203 06:13:57.008670 4872 generic.go:334] "Generic (PLEG): container finished" podID="7856ee3e-22ea-4e77-b4aa-69893fc7e281" containerID="1b28ea48eba6300bc84920f4f5b7570c6ae62d0ebb47a8a7eec20135d8269d06" exitCode=0 Feb 03 06:13:57 crc kubenswrapper[4872]: I0203 06:13:57.008755 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerDied","Data":"1b28ea48eba6300bc84920f4f5b7570c6ae62d0ebb47a8a7eec20135d8269d06"} Feb 03 06:13:58 crc kubenswrapper[4872]: I0203 06:13:58.016832 4872 generic.go:334] "Generic (PLEG): container finished" podID="7856ee3e-22ea-4e77-b4aa-69893fc7e281" containerID="e8b14c9daa6a937eda95e4dbe852007e8457c3b39d5922bd9c1875dd2337a7a2" exitCode=0 Feb 03 06:13:58 crc kubenswrapper[4872]: I0203 06:13:58.016906 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerDied","Data":"e8b14c9daa6a937eda95e4dbe852007e8457c3b39d5922bd9c1875dd2337a7a2"} Feb 03 06:13:59 crc kubenswrapper[4872]: I0203 06:13:59.027599 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerStarted","Data":"520aaa551ac6e3e9029a30d2a5717c6cd88d96aa5979a3e20396e32134bda66a"} Feb 03 06:13:59 crc kubenswrapper[4872]: I0203 06:13:59.027887 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerStarted","Data":"c4ef482c486dee4c1e47422ab9e1ddbd9f12f59c38469509dbb227e72ec61a5f"} Feb 03 06:13:59 crc kubenswrapper[4872]: I0203 06:13:59.027897 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerStarted","Data":"a020da1cd15d65f7040f1901c94a534b40571919b6fb2fcb1a0c65d6247a3ed9"} Feb 03 06:13:59 crc kubenswrapper[4872]: I0203 06:13:59.027905 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerStarted","Data":"0f56f6a0f98586b5a1b8ba23cd3055a3ca9727195f41ed87dd1e118704d55e1e"} Feb 03 06:13:59 crc kubenswrapper[4872]: I0203 06:13:59.027914 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerStarted","Data":"870182c7548d85e556141a88dbbb53683775c9de2d01a473b8d96a5846c96a47"} Feb 03 06:14:00 crc kubenswrapper[4872]: I0203 06:14:00.041027 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qrbzc" event={"ID":"7856ee3e-22ea-4e77-b4aa-69893fc7e281","Type":"ContainerStarted","Data":"88c790d065f41c59ead0833846c695c5c448e39ebc8cae161505219d2404a7cf"} Feb 03 06:14:00 crc kubenswrapper[4872]: I0203 06:14:00.042231 4872 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:14:00 crc kubenswrapper[4872]: I0203 06:14:00.088825 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qrbzc" podStartSLOduration=6.495370974 podStartE2EDuration="14.088799214s" podCreationTimestamp="2026-02-03 06:13:46 +0000 UTC" firstStartedPulling="2026-02-03 06:13:48.068431003 +0000 UTC m=+798.651122417" lastFinishedPulling="2026-02-03 06:13:55.661859243 +0000 UTC m=+806.244550657" observedRunningTime="2026-02-03 06:14:00.083753353 +0000 UTC m=+810.666444847" watchObservedRunningTime="2026-02-03 06:14:00.088799214 +0000 UTC m=+810.671490708" Feb 03 06:14:02 crc kubenswrapper[4872]: I0203 06:14:02.921567 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:14:02 crc kubenswrapper[4872]: I0203 06:14:02.977382 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:14:07 crc kubenswrapper[4872]: I0203 06:14:07.365546 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-grdt7" Feb 03 06:14:07 crc kubenswrapper[4872]: I0203 06:14:07.846804 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-tb2wg" Feb 03 06:14:08 crc kubenswrapper[4872]: I0203 06:14:08.978782 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4qtc9" Feb 03 06:14:11 crc kubenswrapper[4872]: I0203 06:14:11.874345 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n96ns"] Feb 03 06:14:11 crc kubenswrapper[4872]: I0203 06:14:11.876580 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n96ns" Feb 03 06:14:11 crc kubenswrapper[4872]: I0203 06:14:11.878642 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-745bt" Feb 03 06:14:11 crc kubenswrapper[4872]: I0203 06:14:11.880016 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 03 06:14:11 crc kubenswrapper[4872]: I0203 06:14:11.881964 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 03 06:14:11 crc kubenswrapper[4872]: I0203 06:14:11.889882 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n96ns"] Feb 03 06:14:11 crc kubenswrapper[4872]: I0203 06:14:11.918062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q886k\" (UniqueName: \"kubernetes.io/projected/0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac-kube-api-access-q886k\") pod \"openstack-operator-index-n96ns\" (UID: \"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac\") " pod="openstack-operators/openstack-operator-index-n96ns" Feb 03 06:14:12 crc kubenswrapper[4872]: I0203 06:14:12.019233 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q886k\" (UniqueName: \"kubernetes.io/projected/0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac-kube-api-access-q886k\") pod \"openstack-operator-index-n96ns\" (UID: \"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac\") " pod="openstack-operators/openstack-operator-index-n96ns" Feb 03 06:14:12 crc kubenswrapper[4872]: I0203 06:14:12.039636 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q886k\" (UniqueName: \"kubernetes.io/projected/0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac-kube-api-access-q886k\") pod \"openstack-operator-index-n96ns\" (UID: \"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac\") " pod="openstack-operators/openstack-operator-index-n96ns" Feb 03 06:14:12 crc kubenswrapper[4872]: I0203 06:14:12.197753 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n96ns" Feb 03 06:14:12 crc kubenswrapper[4872]: I0203 06:14:12.389510 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n96ns"] Feb 03 06:14:13 crc kubenswrapper[4872]: I0203 06:14:13.136118 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n96ns" event={"ID":"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac","Type":"ContainerStarted","Data":"8e2383937492c3875c3a7f35485b62541ba7b92152d25ecf779311b6f6bcb533"} Feb 03 06:14:15 crc kubenswrapper[4872]: I0203 06:14:15.227716 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n96ns"] Feb 03 06:14:15 crc kubenswrapper[4872]: I0203 06:14:15.830200 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5nlb5"] Feb 03 06:14:15 crc kubenswrapper[4872]: I0203 06:14:15.830906 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:15 crc kubenswrapper[4872]: I0203 06:14:15.881157 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-786cp\" (UniqueName: \"kubernetes.io/projected/89cb3a32-685e-40fa-9370-374e91db24dd-kube-api-access-786cp\") pod \"openstack-operator-index-5nlb5\" (UID: \"89cb3a32-685e-40fa-9370-374e91db24dd\") " pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:15 crc kubenswrapper[4872]: I0203 06:14:15.904774 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5nlb5"] Feb 03 06:14:15 crc kubenswrapper[4872]: I0203 06:14:15.982614 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-786cp\" (UniqueName: \"kubernetes.io/projected/89cb3a32-685e-40fa-9370-374e91db24dd-kube-api-access-786cp\") pod \"openstack-operator-index-5nlb5\" (UID: \"89cb3a32-685e-40fa-9370-374e91db24dd\") " pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.001763 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-786cp\" (UniqueName: \"kubernetes.io/projected/89cb3a32-685e-40fa-9370-374e91db24dd-kube-api-access-786cp\") pod \"openstack-operator-index-5nlb5\" (UID: \"89cb3a32-685e-40fa-9370-374e91db24dd\") " pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.148091 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.163228 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n96ns" event={"ID":"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac","Type":"ContainerStarted","Data":"1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82"} Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.163343 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-n96ns" podUID="0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac" containerName="registry-server" containerID="cri-o://1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82" gracePeriod=2 Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.527773 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n96ns" Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.595717 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q886k\" (UniqueName: \"kubernetes.io/projected/0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac-kube-api-access-q886k\") pod \"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac\" (UID: \"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac\") " Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.597526 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5nlb5"] Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.600802 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac-kube-api-access-q886k" (OuterVolumeSpecName: "kube-api-access-q886k") pod "0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac" (UID: "0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac"). 
InnerVolumeSpecName "kube-api-access-q886k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:14:16 crc kubenswrapper[4872]: W0203 06:14:16.603177 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89cb3a32_685e_40fa_9370_374e91db24dd.slice/crio-a9b0782ead4b2ae72dce68260fa30e3961626f7e50707eb3192b8d34ee57b420 WatchSource:0}: Error finding container a9b0782ead4b2ae72dce68260fa30e3961626f7e50707eb3192b8d34ee57b420: Status 404 returned error can't find the container with id a9b0782ead4b2ae72dce68260fa30e3961626f7e50707eb3192b8d34ee57b420 Feb 03 06:14:16 crc kubenswrapper[4872]: I0203 06:14:16.697471 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q886k\" (UniqueName: \"kubernetes.io/projected/0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac-kube-api-access-q886k\") on node \"crc\" DevicePath \"\"" Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.173922 4872 generic.go:334] "Generic (PLEG): container finished" podID="0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac" containerID="1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82" exitCode=0 Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.174031 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n96ns" event={"ID":"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac","Type":"ContainerDied","Data":"1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82"} Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.174053 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n96ns" Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.174451 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n96ns" event={"ID":"0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac","Type":"ContainerDied","Data":"8e2383937492c3875c3a7f35485b62541ba7b92152d25ecf779311b6f6bcb533"} Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.174556 4872 scope.go:117] "RemoveContainer" containerID="1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82" Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.180415 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5nlb5" event={"ID":"89cb3a32-685e-40fa-9370-374e91db24dd","Type":"ContainerStarted","Data":"003952cc491c348893721a916a154cb017f6e86042e9df26bd7e32012420d79e"} Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.182674 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5nlb5" event={"ID":"89cb3a32-685e-40fa-9370-374e91db24dd","Type":"ContainerStarted","Data":"a9b0782ead4b2ae72dce68260fa30e3961626f7e50707eb3192b8d34ee57b420"} Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.208023 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5nlb5" podStartSLOduration=2.166445888 podStartE2EDuration="2.20799476s" podCreationTimestamp="2026-02-03 06:14:15 +0000 UTC" firstStartedPulling="2026-02-03 06:14:16.607546576 +0000 UTC m=+827.190237990" lastFinishedPulling="2026-02-03 06:14:16.649095448 +0000 UTC m=+827.231786862" observedRunningTime="2026-02-03 06:14:17.204030895 +0000 UTC m=+827.786722389" watchObservedRunningTime="2026-02-03 06:14:17.20799476 +0000 UTC m=+827.790686224" Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 
06:14:17.247330 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n96ns"] Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.249599 4872 scope.go:117] "RemoveContainer" containerID="1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82" Feb 03 06:14:17 crc kubenswrapper[4872]: E0203 06:14:17.250078 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82\": container with ID starting with 1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82 not found: ID does not exist" containerID="1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82" Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.250120 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82"} err="failed to get container status \"1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82\": rpc error: code = NotFound desc = could not find container \"1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82\": container with ID starting with 1231651f81468020e797130eac6ca7c7563b246ffc749feb1be1fffc8553ce82 not found: ID does not exist" Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.252850 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-n96ns"] Feb 03 06:14:17 crc kubenswrapper[4872]: I0203 06:14:17.924132 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qrbzc" Feb 03 06:14:18 crc kubenswrapper[4872]: I0203 06:14:18.676485 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac" path="/var/lib/kubelet/pods/0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac/volumes" Feb 03 06:14:26 crc kubenswrapper[4872]: I0203 06:14:26.148254 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:26 crc kubenswrapper[4872]: I0203 06:14:26.148702 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:26 crc kubenswrapper[4872]: I0203 06:14:26.192489 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:26 crc kubenswrapper[4872]: I0203 06:14:26.791481 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5nlb5" Feb 03 06:14:27 crc kubenswrapper[4872]: I0203 06:14:27.899272 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj"] Feb 03 06:14:27 crc kubenswrapper[4872]: E0203 06:14:27.899597 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac" containerName="registry-server" Feb 03 06:14:27 crc kubenswrapper[4872]: I0203 06:14:27.899614 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac" containerName="registry-server" Feb 03 06:14:27 crc kubenswrapper[4872]: I0203 06:14:27.899822 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6c2c43-0f90-4aad-a16f-5f4fdf28b2ac" containerName="registry-server" Feb 03 
06:14:27 crc kubenswrapper[4872]: I0203 06:14:27.901071 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:27 crc kubenswrapper[4872]: I0203 06:14:27.916930 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-m9nbg" Feb 03 06:14:27 crc kubenswrapper[4872]: I0203 06:14:27.925040 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj"] Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.074255 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8n79\" (UniqueName: \"kubernetes.io/projected/870be28a-b443-4523-b4a0-c0d773eedaff-kube-api-access-l8n79\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.074543 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-bundle\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.074634 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-util\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.175413 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8n79\" (UniqueName: \"kubernetes.io/projected/870be28a-b443-4523-b4a0-c0d773eedaff-kube-api-access-l8n79\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.175515 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-bundle\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.175555 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-util\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.176103 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-util\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.176376 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-bundle\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.216652 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8n79\" (UniqueName: \"kubernetes.io/projected/870be28a-b443-4523-b4a0-c0d773eedaff-kube-api-access-l8n79\") pod \"198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.257520 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.566428 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj"] Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.759900 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" event={"ID":"870be28a-b443-4523-b4a0-c0d773eedaff","Type":"ContainerStarted","Data":"f93dc8bcbe83be27b5f91523da304ddc387beaaa7b86a2e1de94f6ec6bcc44f7"} Feb 03 06:14:28 crc kubenswrapper[4872]: I0203 06:14:28.760253 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" event={"ID":"870be28a-b443-4523-b4a0-c0d773eedaff","Type":"ContainerStarted","Data":"e63648ecbd6ddec62454734e6324fd647eebf698ea6fca03307b08457a4e3664"} Feb 03 06:14:29 crc kubenswrapper[4872]: I0203 06:14:29.770263 4872 generic.go:334] "Generic (PLEG): container finished" podID="870be28a-b443-4523-b4a0-c0d773eedaff" containerID="f93dc8bcbe83be27b5f91523da304ddc387beaaa7b86a2e1de94f6ec6bcc44f7" exitCode=0 Feb 03 06:14:29 crc kubenswrapper[4872]: I0203 06:14:29.770338 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" event={"ID":"870be28a-b443-4523-b4a0-c0d773eedaff","Type":"ContainerDied","Data":"f93dc8bcbe83be27b5f91523da304ddc387beaaa7b86a2e1de94f6ec6bcc44f7"} Feb 03 06:14:30 crc kubenswrapper[4872]: I0203 06:14:30.783954 4872 generic.go:334] "Generic (PLEG): container finished" podID="870be28a-b443-4523-b4a0-c0d773eedaff" containerID="e7d5b5be4513440e903f745f45231c92110c549a4455daa2657f196c9b74d06a" exitCode=0 Feb 03 06:14:30 crc kubenswrapper[4872]: I0203 06:14:30.784028 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" 
event={"ID":"870be28a-b443-4523-b4a0-c0d773eedaff","Type":"ContainerDied","Data":"e7d5b5be4513440e903f745f45231c92110c549a4455daa2657f196c9b74d06a"} Feb 03 06:14:31 crc kubenswrapper[4872]: E0203 06:14:31.181014 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod870be28a_b443_4523_b4a0_c0d773eedaff.slice/crio-73cac1692b8134c42b9af6ebd247c33f781ff72d4e63e6d98fbcee84fd542776.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod870be28a_b443_4523_b4a0_c0d773eedaff.slice/crio-conmon-73cac1692b8134c42b9af6ebd247c33f781ff72d4e63e6d98fbcee84fd542776.scope\": RecentStats: unable to find data in memory cache]" Feb 03 06:14:31 crc kubenswrapper[4872]: I0203 06:14:31.794401 4872 generic.go:334] "Generic (PLEG): container finished" podID="870be28a-b443-4523-b4a0-c0d773eedaff" containerID="73cac1692b8134c42b9af6ebd247c33f781ff72d4e63e6d98fbcee84fd542776" exitCode=0 Feb 03 06:14:31 crc kubenswrapper[4872]: I0203 06:14:31.794452 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" event={"ID":"870be28a-b443-4523-b4a0-c0d773eedaff","Type":"ContainerDied","Data":"73cac1692b8134c42b9af6ebd247c33f781ff72d4e63e6d98fbcee84fd542776"} Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.094122 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.259257 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-bundle\") pod \"870be28a-b443-4523-b4a0-c0d773eedaff\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.259556 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8n79\" (UniqueName: \"kubernetes.io/projected/870be28a-b443-4523-b4a0-c0d773eedaff-kube-api-access-l8n79\") pod \"870be28a-b443-4523-b4a0-c0d773eedaff\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.259772 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-util\") pod \"870be28a-b443-4523-b4a0-c0d773eedaff\" (UID: \"870be28a-b443-4523-b4a0-c0d773eedaff\") " Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.261374 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-bundle" (OuterVolumeSpecName: "bundle") pod "870be28a-b443-4523-b4a0-c0d773eedaff" (UID: "870be28a-b443-4523-b4a0-c0d773eedaff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.270895 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870be28a-b443-4523-b4a0-c0d773eedaff-kube-api-access-l8n79" (OuterVolumeSpecName: "kube-api-access-l8n79") pod "870be28a-b443-4523-b4a0-c0d773eedaff" (UID: "870be28a-b443-4523-b4a0-c0d773eedaff"). InnerVolumeSpecName "kube-api-access-l8n79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.283715 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-util" (OuterVolumeSpecName: "util") pod "870be28a-b443-4523-b4a0-c0d773eedaff" (UID: "870be28a-b443-4523-b4a0-c0d773eedaff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.362611 4872 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.362969 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8n79\" (UniqueName: \"kubernetes.io/projected/870be28a-b443-4523-b4a0-c0d773eedaff-kube-api-access-l8n79\") on node \"crc\" DevicePath \"\"" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.363097 4872 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870be28a-b443-4523-b4a0-c0d773eedaff-util\") on node \"crc\" DevicePath \"\"" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.812767 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" event={"ID":"870be28a-b443-4523-b4a0-c0d773eedaff","Type":"ContainerDied","Data":"e63648ecbd6ddec62454734e6324fd647eebf698ea6fca03307b08457a4e3664"} Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.812825 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e63648ecbd6ddec62454734e6324fd647eebf698ea6fca03307b08457a4e3664" Feb 03 06:14:33 crc kubenswrapper[4872]: I0203 06:14:33.812885 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.273198 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b"] Feb 03 06:14:35 crc kubenswrapper[4872]: E0203 06:14:35.273611 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870be28a-b443-4523-b4a0-c0d773eedaff" containerName="util" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.273623 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="870be28a-b443-4523-b4a0-c0d773eedaff" containerName="util" Feb 03 06:14:35 crc kubenswrapper[4872]: E0203 06:14:35.273637 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870be28a-b443-4523-b4a0-c0d773eedaff" containerName="pull" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.273642 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="870be28a-b443-4523-b4a0-c0d773eedaff" containerName="pull" Feb 03 06:14:35 crc kubenswrapper[4872]: E0203 06:14:35.273656 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870be28a-b443-4523-b4a0-c0d773eedaff" containerName="extract" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.273661 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="870be28a-b443-4523-b4a0-c0d773eedaff" containerName="extract" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.273776 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="870be28a-b443-4523-b4a0-c0d773eedaff" containerName="extract" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.274111 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.277322 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-rqr2r" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.347162 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b"] Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.389508 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bz4c\" (UniqueName: \"kubernetes.io/projected/69023f68-79e2-4de9-a210-32ecce7b635b-kube-api-access-5bz4c\") pod \"openstack-operator-controller-init-67c68487b9-mxm2b\" (UID: \"69023f68-79e2-4de9-a210-32ecce7b635b\") " pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.490812 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bz4c\" (UniqueName: \"kubernetes.io/projected/69023f68-79e2-4de9-a210-32ecce7b635b-kube-api-access-5bz4c\") pod \"openstack-operator-controller-init-67c68487b9-mxm2b\" (UID: \"69023f68-79e2-4de9-a210-32ecce7b635b\") " pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.521525 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bz4c\" (UniqueName: \"kubernetes.io/projected/69023f68-79e2-4de9-a210-32ecce7b635b-kube-api-access-5bz4c\") pod \"openstack-operator-controller-init-67c68487b9-mxm2b\" (UID: 
\"69023f68-79e2-4de9-a210-32ecce7b635b\") " pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" Feb 03 06:14:35 crc kubenswrapper[4872]: I0203 06:14:35.588193 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" Feb 03 06:14:36 crc kubenswrapper[4872]: I0203 06:14:36.108972 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b"] Feb 03 06:14:36 crc kubenswrapper[4872]: I0203 06:14:36.857436 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" event={"ID":"69023f68-79e2-4de9-a210-32ecce7b635b","Type":"ContainerStarted","Data":"3c5381fb8cac28df1eefa3d3c01f723c82519b4569eb064473eb9f27aba2d905"} Feb 03 06:14:40 crc kubenswrapper[4872]: I0203 06:14:40.887365 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" event={"ID":"69023f68-79e2-4de9-a210-32ecce7b635b","Type":"ContainerStarted","Data":"54d401e1ef85784765819432ff0ff3badcecf2fcf92a7e688c6605e6fa4618ae"} Feb 03 06:14:40 crc kubenswrapper[4872]: I0203 06:14:40.887895 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" Feb 03 06:14:40 crc kubenswrapper[4872]: I0203 06:14:40.937926 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" podStartSLOduration=1.759163944 podStartE2EDuration="5.937901402s" podCreationTimestamp="2026-02-03 06:14:35 +0000 UTC" firstStartedPulling="2026-02-03 06:14:36.1194254 +0000 UTC m=+846.702116814" lastFinishedPulling="2026-02-03 06:14:40.298162858 +0000 UTC m=+850.880854272" observedRunningTime="2026-02-03 06:14:40.933584789 +0000 UTC m=+851.516276213" watchObservedRunningTime="2026-02-03 06:14:40.937901402 +0000 UTC m=+851.520592846" Feb 03 06:14:45 crc kubenswrapper[4872]: I0203 06:14:45.592265 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-67c68487b9-mxm2b" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.177482 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x"] Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.178595 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.184936 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.185225 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.239552 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x"] Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.288080 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/231294bd-0890-405c-b99a-91471441e1e8-secret-volume\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.288127 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4469\" (UniqueName: \"kubernetes.io/projected/231294bd-0890-405c-b99a-91471441e1e8-kube-api-access-k4469\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.288175 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231294bd-0890-405c-b99a-91471441e1e8-config-volume\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.395322 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231294bd-0890-405c-b99a-91471441e1e8-config-volume\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.395431 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/231294bd-0890-405c-b99a-91471441e1e8-secret-volume\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.395455 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4469\" (UniqueName: \"kubernetes.io/projected/231294bd-0890-405c-b99a-91471441e1e8-kube-api-access-k4469\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.396681 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231294bd-0890-405c-b99a-91471441e1e8-config-volume\") pod 
\"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.403542 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/231294bd-0890-405c-b99a-91471441e1e8-secret-volume\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.435195 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4469\" (UniqueName: \"kubernetes.io/projected/231294bd-0890-405c-b99a-91471441e1e8-kube-api-access-k4469\") pod \"collect-profiles-29501655-vb28x\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.497467 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:00 crc kubenswrapper[4872]: I0203 06:15:00.890010 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x"] Feb 03 06:15:01 crc kubenswrapper[4872]: I0203 06:15:01.009054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" event={"ID":"231294bd-0890-405c-b99a-91471441e1e8","Type":"ContainerStarted","Data":"eee0b9ecfe67e37b77d0f91a6185211df7430d35d690960fe6deb56e1d44f693"} Feb 03 06:15:01 crc kubenswrapper[4872]: I0203 06:15:01.271315 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:15:01 crc kubenswrapper[4872]: I0203 06:15:01.271382 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:15:02 crc kubenswrapper[4872]: I0203 06:15:02.015674 4872 generic.go:334] "Generic (PLEG): container finished" podID="231294bd-0890-405c-b99a-91471441e1e8" containerID="cc48c9f4d8e4b659b49b03e5947fb6b4b03d1d354023d3aca6759aa2a97696d4" exitCode=0 Feb 03 06:15:02 crc kubenswrapper[4872]: I0203 06:15:02.015744 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" event={"ID":"231294bd-0890-405c-b99a-91471441e1e8","Type":"ContainerDied","Data":"cc48c9f4d8e4b659b49b03e5947fb6b4b03d1d354023d3aca6759aa2a97696d4"} Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.319783 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.340876 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231294bd-0890-405c-b99a-91471441e1e8-config-volume\") pod \"231294bd-0890-405c-b99a-91471441e1e8\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.341035 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4469\" (UniqueName: \"kubernetes.io/projected/231294bd-0890-405c-b99a-91471441e1e8-kube-api-access-k4469\") pod \"231294bd-0890-405c-b99a-91471441e1e8\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.341064 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/231294bd-0890-405c-b99a-91471441e1e8-secret-volume\") pod \"231294bd-0890-405c-b99a-91471441e1e8\" (UID: \"231294bd-0890-405c-b99a-91471441e1e8\") " Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.342219 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/231294bd-0890-405c-b99a-91471441e1e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "231294bd-0890-405c-b99a-91471441e1e8" (UID: "231294bd-0890-405c-b99a-91471441e1e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.351721 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231294bd-0890-405c-b99a-91471441e1e8-kube-api-access-k4469" (OuterVolumeSpecName: "kube-api-access-k4469") pod "231294bd-0890-405c-b99a-91471441e1e8" (UID: "231294bd-0890-405c-b99a-91471441e1e8"). InnerVolumeSpecName "kube-api-access-k4469". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.351732 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/231294bd-0890-405c-b99a-91471441e1e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "231294bd-0890-405c-b99a-91471441e1e8" (UID: "231294bd-0890-405c-b99a-91471441e1e8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.442928 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4469\" (UniqueName: \"kubernetes.io/projected/231294bd-0890-405c-b99a-91471441e1e8-kube-api-access-k4469\") on node \"crc\" DevicePath \"\"" Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.442964 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/231294bd-0890-405c-b99a-91471441e1e8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 06:15:03 crc kubenswrapper[4872]: I0203 06:15:03.442974 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/231294bd-0890-405c-b99a-91471441e1e8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 06:15:04 crc kubenswrapper[4872]: I0203 06:15:04.027479 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" event={"ID":"231294bd-0890-405c-b99a-91471441e1e8","Type":"ContainerDied","Data":"eee0b9ecfe67e37b77d0f91a6185211df7430d35d690960fe6deb56e1d44f693"} Feb 03 06:15:04 crc kubenswrapper[4872]: I0203 06:15:04.027515 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee0b9ecfe67e37b77d0f91a6185211df7430d35d690960fe6deb56e1d44f693" Feb 03 06:15:04 crc kubenswrapper[4872]: I0203 06:15:04.027608 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.681851 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4"] Feb 03 06:15:05 crc kubenswrapper[4872]: E0203 06:15:05.682360 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231294bd-0890-405c-b99a-91471441e1e8" containerName="collect-profiles" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.682372 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="231294bd-0890-405c-b99a-91471441e1e8" containerName="collect-profiles" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.682472 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="231294bd-0890-405c-b99a-91471441e1e8" containerName="collect-profiles" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.682951 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.685423 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.686108 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.688250 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-72h2l" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.690148 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qv9pc" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.693617 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.700528 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.739078 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.739796 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.745067 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-v49kf" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.746360 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-x2779"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.746918 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.751273 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xxtds" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.759739 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.800722 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.802977 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.813057 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g7c27" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.813490 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6msf6\" (UniqueName: \"kubernetes.io/projected/29e7b8a5-19cf-46ea-a135-019d30af35b3-kube-api-access-6msf6\") pod \"glance-operator-controller-manager-8886f4c47-x2779\" (UID: \"29e7b8a5-19cf-46ea-a135-019d30af35b3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.813542 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2jvb\" (UniqueName: \"kubernetes.io/projected/876b6e4d-32cd-47e3-b748-f9c8ea1d84cf-kube-api-access-t2jvb\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-zznkj\" (UID: \"876b6e4d-32cd-47e3-b748-f9c8ea1d84cf\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.813563 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hw24\" (UniqueName: \"kubernetes.io/projected/7102e0e7-3daa-4610-b931-ca17c7f08461-kube-api-access-4hw24\") pod \"designate-operator-controller-manager-6d9697b7f4-5chl5\" (UID: \"7102e0e7-3daa-4610-b931-ca17c7f08461\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.813605 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rk7m\" (UniqueName: \"kubernetes.io/projected/febe7a4c-e275-4af0-b895-8701c164271c-kube-api-access-6rk7m\") pod \"cinder-operator-controller-manager-8d874c8fc-bw7h4\" (UID: \"febe7a4c-e275-4af0-b895-8701c164271c\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.822727 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-x2779"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.827309 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.835431 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.836228 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.837891 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-sr4zp" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.842725 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9qph7"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.843414 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.851574 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qgr72" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.851776 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.881264 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.898795 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9qph7"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.913590 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.914435 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915731 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68bpt\" (UniqueName: \"kubernetes.io/projected/394038df-4d8a-41cc-bb90-02dec7dd1fb3-kube-api-access-68bpt\") pod \"heat-operator-controller-manager-69d6db494d-dpd6g\" (UID: \"394038df-4d8a-41cc-bb90-02dec7dd1fb3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915768 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6msf6\" (UniqueName: \"kubernetes.io/projected/29e7b8a5-19cf-46ea-a135-019d30af35b3-kube-api-access-6msf6\") pod \"glance-operator-controller-manager-8886f4c47-x2779\" (UID: \"29e7b8a5-19cf-46ea-a135-019d30af35b3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915793 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7gm9\" (UniqueName: \"kubernetes.io/projected/cd3e162d-6733-47c4-b507-c08c577723d0-kube-api-access-w7gm9\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915820 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915844 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2jvb\" (UniqueName: \"kubernetes.io/projected/876b6e4d-32cd-47e3-b748-f9c8ea1d84cf-kube-api-access-t2jvb\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-zznkj\" (UID: \"876b6e4d-32cd-47e3-b748-f9c8ea1d84cf\") " 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915861 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hw24\" (UniqueName: \"kubernetes.io/projected/7102e0e7-3daa-4610-b931-ca17c7f08461-kube-api-access-4hw24\") pod \"designate-operator-controller-manager-6d9697b7f4-5chl5\" (UID: \"7102e0e7-3daa-4610-b931-ca17c7f08461\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915900 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngnp\" (UniqueName: \"kubernetes.io/projected/8fc2acde-dcbe-4d32-ad0e-cd4627c2152b-kube-api-access-pngnp\") pod \"horizon-operator-controller-manager-5fb775575f-n2mcv\" (UID: \"8fc2acde-dcbe-4d32-ad0e-cd4627c2152b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.915922 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rk7m\" (UniqueName: \"kubernetes.io/projected/febe7a4c-e275-4af0-b895-8701c164271c-kube-api-access-6rk7m\") pod \"cinder-operator-controller-manager-8d874c8fc-bw7h4\" (UID: \"febe7a4c-e275-4af0-b895-8701c164271c\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.921224 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wf9hm" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.931064 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.946987 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.947741 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.953410 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6qrcb" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.954435 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hw24\" (UniqueName: \"kubernetes.io/projected/7102e0e7-3daa-4610-b931-ca17c7f08461-kube-api-access-4hw24\") pod \"designate-operator-controller-manager-6d9697b7f4-5chl5\" (UID: \"7102e0e7-3daa-4610-b931-ca17c7f08461\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.969407 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2jvb\" (UniqueName: \"kubernetes.io/projected/876b6e4d-32cd-47e3-b748-f9c8ea1d84cf-kube-api-access-t2jvb\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-zznkj\" (UID: \"876b6e4d-32cd-47e3-b748-f9c8ea1d84cf\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.975916 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rk7m\" (UniqueName: \"kubernetes.io/projected/febe7a4c-e275-4af0-b895-8701c164271c-kube-api-access-6rk7m\") pod \"cinder-operator-controller-manager-8d874c8fc-bw7h4\" (UID: \"febe7a4c-e275-4af0-b895-8701c164271c\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.975964 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"] Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.976799 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.989306 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6msf6\" (UniqueName: \"kubernetes.io/projected/29e7b8a5-19cf-46ea-a135-019d30af35b3-kube-api-access-6msf6\") pod \"glance-operator-controller-manager-8886f4c47-x2779\" (UID: \"29e7b8a5-19cf-46ea-a135-019d30af35b3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.998537 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8gp4c" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.999070 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" Feb 03 06:15:05 crc kubenswrapper[4872]: I0203 06:15:05.999633 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx"] Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.000374 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.007990 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.014591 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8"] Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.019839 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68bpt\" (UniqueName: \"kubernetes.io/projected/394038df-4d8a-41cc-bb90-02dec7dd1fb3-kube-api-access-68bpt\") pod \"heat-operator-controller-manager-69d6db494d-dpd6g\" (UID: \"394038df-4d8a-41cc-bb90-02dec7dd1fb3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.019886 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-422nw\" (UniqueName: \"kubernetes.io/projected/7319691f-007c-45cd-bd1b-11055339e2ab-kube-api-access-422nw\") pod \"ironic-operator-controller-manager-5f4b8bd54d-lnvft\" (UID: \"7319691f-007c-45cd-bd1b-11055339e2ab\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.019967 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7gm9\" (UniqueName: \"kubernetes.io/projected/cd3e162d-6733-47c4-b507-c08c577723d0-kube-api-access-w7gm9\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.019992 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.020022 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pplzg\" (UniqueName: \"kubernetes.io/projected/71308b40-7203-4586-9a21-9b4621a9aaf7-kube-api-access-pplzg\") pod \"keystone-operator-controller-manager-84f48565d4-9x5w8\" (UID: \"71308b40-7203-4586-9a21-9b4621a9aaf7\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.020112 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngnp\" (UniqueName: \"kubernetes.io/projected/8fc2acde-dcbe-4d32-ad0e-cd4627c2152b-kube-api-access-pngnp\") pod \"horizon-operator-controller-manager-5fb775575f-n2mcv\" (UID: \"8fc2acde-dcbe-4d32-ad0e-cd4627c2152b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" Feb 03 06:15:06 crc kubenswrapper[4872]: E0203 06:15:06.020559 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 06:15:06 crc kubenswrapper[4872]: E0203 06:15:06.020647 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert podName:cd3e162d-6733-47c4-b507-c08c577723d0 nodeName:}" failed. 
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.028859 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vg2js"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.055548 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.063006 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.084133 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.089470 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngnp\" (UniqueName: \"kubernetes.io/projected/8fc2acde-dcbe-4d32-ad0e-cd4627c2152b-kube-api-access-pngnp\") pod \"horizon-operator-controller-manager-5fb775575f-n2mcv\" (UID: \"8fc2acde-dcbe-4d32-ad0e-cd4627c2152b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.098877 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.137916 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-422nw\" (UniqueName: \"kubernetes.io/projected/7319691f-007c-45cd-bd1b-11055339e2ab-kube-api-access-422nw\") pod \"ironic-operator-controller-manager-5f4b8bd54d-lnvft\" (UID: \"7319691f-007c-45cd-bd1b-11055339e2ab\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.137998 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pplzg\" (UniqueName: \"kubernetes.io/projected/71308b40-7203-4586-9a21-9b4621a9aaf7-kube-api-access-pplzg\") pod \"keystone-operator-controller-manager-84f48565d4-9x5w8\" (UID: \"71308b40-7203-4586-9a21-9b4621a9aaf7\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.138049 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6t5z\" (UniqueName: \"kubernetes.io/projected/cfe508f3-98be-48d5-bf5b-3cb24a9ba131-kube-api-access-q6t5z\") pod \"mariadb-operator-controller-manager-67bf948998-ww2cx\" (UID: \"cfe508f3-98be-48d5-bf5b-3cb24a9ba131\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.138091 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fnt5\" (UniqueName: \"kubernetes.io/projected/c3aba523-0e11-4e5d-9adf-be5978a1f4e1-kube-api-access-4fnt5\") pod \"manila-operator-controller-manager-7dd968899f-dvqpz\" (UID: \"c3aba523-0e11-4e5d-9adf-be5978a1f4e1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.146630 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68bpt\" (UniqueName: \"kubernetes.io/projected/394038df-4d8a-41cc-bb90-02dec7dd1fb3-kube-api-access-68bpt\") pod \"heat-operator-controller-manager-69d6db494d-dpd6g\" (UID: \"394038df-4d8a-41cc-bb90-02dec7dd1fb3\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.147217 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7gm9\" (UniqueName: \"kubernetes.io/projected/cd3e162d-6733-47c4-b507-c08c577723d0-kube-api-access-w7gm9\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.173839 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.177939 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.262353 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pplzg\" (UniqueName: \"kubernetes.io/projected/71308b40-7203-4586-9a21-9b4621a9aaf7-kube-api-access-pplzg\") pod \"keystone-operator-controller-manager-84f48565d4-9x5w8\" (UID: \"71308b40-7203-4586-9a21-9b4621a9aaf7\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.274196 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/71efcd75-c242-4036-b2e0-fdb117880dd9-kube-api-access-4df2w\") pod \"neutron-operator-controller-manager-585dbc889-czpn5\" (UID: \"71efcd75-c242-4036-b2e0-fdb117880dd9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.274254 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6t5z\" (UniqueName: \"kubernetes.io/projected/cfe508f3-98be-48d5-bf5b-3cb24a9ba131-kube-api-access-q6t5z\") pod \"mariadb-operator-controller-manager-67bf948998-ww2cx\" (UID: \"cfe508f3-98be-48d5-bf5b-3cb24a9ba131\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.274296 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fnt5\" (UniqueName: \"kubernetes.io/projected/c3aba523-0e11-4e5d-9adf-be5978a1f4e1-kube-api-access-4fnt5\") pod \"manila-operator-controller-manager-7dd968899f-dvqpz\" (UID: \"c3aba523-0e11-4e5d-9adf-be5978a1f4e1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.276970 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-422nw\" (UniqueName: \"kubernetes.io/projected/7319691f-007c-45cd-bd1b-11055339e2ab-kube-api-access-422nw\") pod \"ironic-operator-controller-manager-5f4b8bd54d-lnvft\" (UID: \"7319691f-007c-45cd-bd1b-11055339e2ab\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.309081 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s4cj8"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.310434 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.317076 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.317941 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.323807 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.324459 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.326123 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k6gxc"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.340137 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rl5tf"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.376922 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/71efcd75-c242-4036-b2e0-fdb117880dd9-kube-api-access-4df2w\") pod \"neutron-operator-controller-manager-585dbc889-czpn5\" (UID: \"71efcd75-c242-4036-b2e0-fdb117880dd9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.380330 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6t5z\" (UniqueName: \"kubernetes.io/projected/cfe508f3-98be-48d5-bf5b-3cb24a9ba131-kube-api-access-q6t5z\") pod \"mariadb-operator-controller-manager-67bf948998-ww2cx\" (UID: \"cfe508f3-98be-48d5-bf5b-3cb24a9ba131\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.387721 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fnt5\" (UniqueName: \"kubernetes.io/projected/c3aba523-0e11-4e5d-9adf-be5978a1f4e1-kube-api-access-4fnt5\") pod \"manila-operator-controller-manager-7dd968899f-dvqpz\" (UID: \"c3aba523-0e11-4e5d-9adf-be5978a1f4e1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.387756 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.420063 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.429033 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.444303 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.467029 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.479758 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fkn\" (UniqueName: \"kubernetes.io/projected/e7d3c449-bd59-48b2-9047-2c7589cdf51a-kube-api-access-96fkn\") pod \"octavia-operator-controller-manager-6687f8d877-p475q\" (UID: \"e7d3c449-bd59-48b2-9047-2c7589cdf51a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.480071 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xnb\" (UniqueName: \"kubernetes.io/projected/026cffca-2976-4ba1-8bb6-3e86c4521166-kube-api-access-45xnb\") pod \"nova-operator-controller-manager-55bff696bd-rjz4z\" (UID: \"026cffca-2976-4ba1-8bb6-3e86c4521166\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.482353 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4df2w\" (UniqueName: \"kubernetes.io/projected/71efcd75-c242-4036-b2e0-fdb117880dd9-kube-api-access-4df2w\") pod \"neutron-operator-controller-manager-585dbc889-czpn5\" (UID: \"71efcd75-c242-4036-b2e0-fdb117880dd9\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.498246 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.499230 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.511624 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.512463 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.512986 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.513558 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.514534 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.514839 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.529351 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.529411 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-x8xps"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.529575 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-z8fgs"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.529639 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-pgjxf"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.537532 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ftwqn"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.537771 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.540110 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.563721 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.566800 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.573829 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd"]
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581485 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xnb\" (UniqueName: \"kubernetes.io/projected/026cffca-2976-4ba1-8bb6-3e86c4521166-kube-api-access-45xnb\") pod \"nova-operator-controller-manager-55bff696bd-rjz4z\" (UID: \"026cffca-2976-4ba1-8bb6-3e86c4521166\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581532 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd6nw\" (UniqueName: \"kubernetes.io/projected/9bb5cb68-4c55-4c47-beb8-a9caa56db1b3-kube-api-access-xd6nw\") pod \"placement-operator-controller-manager-5b964cf4cd-vl4tx\" (UID: \"9bb5cb68-4c55-4c47-beb8-a9caa56db1b3\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581553 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59mp2\" (UniqueName: \"kubernetes.io/projected/d9a67e95-335b-40cf-af71-4f3fd69a1fd9-kube-api-access-59mp2\") pod \"swift-operator-controller-manager-68fc8c869-w7ncd\" (UID: \"d9a67e95-335b-40cf-af71-4f3fd69a1fd9\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581590 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581622 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmj6\" (UniqueName: \"kubernetes.io/projected/08ed93ce-02ea-45af-b481-69ed92f5aff5-kube-api-access-fpmj6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581649 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtp4\" (UniqueName: \"kubernetes.io/projected/584404c0-4ffd-43f5-a06f-009650dc0cc9-kube-api-access-qmtp4\") pod \"ovn-operator-controller-manager-788c46999f-lkxq4\" (UID: \"584404c0-4ffd-43f5-a06f-009650dc0cc9\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581670 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4"
Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.581705 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96fkn\" (UniqueName: \"kubernetes.io/projected/e7d3c449-bd59-48b2-9047-2c7589cdf51a-kube-api-access-96fkn\") pod \"octavia-operator-controller-manager-6687f8d877-p475q\" (UID: \"e7d3c449-bd59-48b2-9047-2c7589cdf51a\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q"
Feb 03 06:15:06 crc kubenswrapper[4872]: E0203 06:15:06.583305 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 03 06:15:06 crc kubenswrapper[4872]: E0203 06:15:06.583352 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert podName:cd3e162d-6733-47c4-b507-c08c577723d0 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:07.583338477 +0000 UTC m=+878.166029891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert") pod "infra-operator-controller-manager-79955696d6-9qph7" (UID: "cd3e162d-6733-47c4-b507-c08c577723d0") : secret "infra-operator-webhook-server-cert" not found
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.644316 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xnb\" (UniqueName: \"kubernetes.io/projected/026cffca-2976-4ba1-8bb6-3e86c4521166-kube-api-access-45xnb\") pod \"nova-operator-controller-manager-55bff696bd-rjz4z\" (UID: \"026cffca-2976-4ba1-8bb6-3e86c4521166\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.653181 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv"] Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.653985 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.665182 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-whthd" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.666797 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv"] Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.693785 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpmj6\" (UniqueName: \"kubernetes.io/projected/08ed93ce-02ea-45af-b481-69ed92f5aff5-kube-api-access-fpmj6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.693829 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmtp4\" (UniqueName: \"kubernetes.io/projected/584404c0-4ffd-43f5-a06f-009650dc0cc9-kube-api-access-qmtp4\") pod \"ovn-operator-controller-manager-788c46999f-lkxq4\" (UID: \"584404c0-4ffd-43f5-a06f-009650dc0cc9\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.693865 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.693922 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd6nw\" (UniqueName: \"kubernetes.io/projected/9bb5cb68-4c55-4c47-beb8-a9caa56db1b3-kube-api-access-xd6nw\") pod \"placement-operator-controller-manager-5b964cf4cd-vl4tx\" (UID: \"9bb5cb68-4c55-4c47-beb8-a9caa56db1b3\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.693943 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59mp2\" (UniqueName: \"kubernetes.io/projected/d9a67e95-335b-40cf-af71-4f3fd69a1fd9-kube-api-access-59mp2\") pod \"swift-operator-controller-manager-68fc8c869-w7ncd\" (UID: 
\"d9a67e95-335b-40cf-af71-4f3fd69a1fd9\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" Feb 03 06:15:06 crc kubenswrapper[4872]: E0203 06:15:06.694969 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:06 crc kubenswrapper[4872]: E0203 06:15:06.695006 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert podName:08ed93ce-02ea-45af-b481-69ed92f5aff5 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:07.194992717 +0000 UTC m=+877.777684131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" (UID: "08ed93ce-02ea-45af-b481-69ed92f5aff5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.695287 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.699554 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-t4l5f"] Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.700350 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.705590 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-cmtfr" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.743638 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-t4l5f"] Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.748281 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmtp4\" (UniqueName: \"kubernetes.io/projected/584404c0-4ffd-43f5-a06f-009650dc0cc9-kube-api-access-qmtp4\") pod \"ovn-operator-controller-manager-788c46999f-lkxq4\" (UID: \"584404c0-4ffd-43f5-a06f-009650dc0cc9\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.753472 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpmj6\" (UniqueName: \"kubernetes.io/projected/08ed93ce-02ea-45af-b481-69ed92f5aff5-kube-api-access-fpmj6\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.763548 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.765094 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59mp2\" (UniqueName: \"kubernetes.io/projected/d9a67e95-335b-40cf-af71-4f3fd69a1fd9-kube-api-access-59mp2\") pod \"swift-operator-controller-manager-68fc8c869-w7ncd\" (UID: \"d9a67e95-335b-40cf-af71-4f3fd69a1fd9\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.773000 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd6nw\" (UniqueName: \"kubernetes.io/projected/9bb5cb68-4c55-4c47-beb8-a9caa56db1b3-kube-api-access-xd6nw\") pod \"placement-operator-controller-manager-5b964cf4cd-vl4tx\" (UID: \"9bb5cb68-4c55-4c47-beb8-a9caa56db1b3\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.799853 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj47j\" (UniqueName: \"kubernetes.io/projected/39f4fe96-ca54-4135-9eb3-e40a187e54a4-kube-api-access-hj47j\") pod \"watcher-operator-controller-manager-564965969-t4l5f\" (UID: \"39f4fe96-ca54-4135-9eb3-e40a187e54a4\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.799898 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw7dt\" (UniqueName: \"kubernetes.io/projected/e0b27752-d9b8-4bd6-92c4-253508657db5-kube-api-access-kw7dt\") pod \"test-operator-controller-manager-56f8bfcd9f-rlwfv\" (UID: \"e0b27752-d9b8-4bd6-92c4-253508657db5\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.799921 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkg98\" (UniqueName: \"kubernetes.io/projected/d843f756-0ec0-4a79-b34d-14e257e22102-kube-api-access-zkg98\") pod \"telemetry-operator-controller-manager-64b5b76f97-29rp5\" (UID: \"d843f756-0ec0-4a79-b34d-14e257e22102\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.864214 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t"] Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.871229 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.873124 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.878067 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.878249 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.878475 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-z8znp" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.897274 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.944511 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj47j\" (UniqueName: \"kubernetes.io/projected/39f4fe96-ca54-4135-9eb3-e40a187e54a4-kube-api-access-hj47j\") pod \"watcher-operator-controller-manager-564965969-t4l5f\" (UID: \"39f4fe96-ca54-4135-9eb3-e40a187e54a4\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.955484 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw7dt\" (UniqueName: \"kubernetes.io/projected/e0b27752-d9b8-4bd6-92c4-253508657db5-kube-api-access-kw7dt\") pod \"test-operator-controller-manager-56f8bfcd9f-rlwfv\" (UID: \"e0b27752-d9b8-4bd6-92c4-253508657db5\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.955625 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkg98\" (UniqueName: \"kubernetes.io/projected/d843f756-0ec0-4a79-b34d-14e257e22102-kube-api-access-zkg98\") pod \"telemetry-operator-controller-manager-64b5b76f97-29rp5\" (UID: \"d843f756-0ec0-4a79-b34d-14e257e22102\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" Feb 03 06:15:06 crc kubenswrapper[4872]: I0203 06:15:06.950213 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.021078 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj47j\" (UniqueName: \"kubernetes.io/projected/39f4fe96-ca54-4135-9eb3-e40a187e54a4-kube-api-access-hj47j\") pod \"watcher-operator-controller-manager-564965969-t4l5f\" (UID: \"39f4fe96-ca54-4135-9eb3-e40a187e54a4\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.024958 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.037825 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkg98\" (UniqueName: \"kubernetes.io/projected/d843f756-0ec0-4a79-b34d-14e257e22102-kube-api-access-zkg98\") pod \"telemetry-operator-controller-manager-64b5b76f97-29rp5\" (UID: \"d843f756-0ec0-4a79-b34d-14e257e22102\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.056217 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw7dt\" (UniqueName: \"kubernetes.io/projected/e0b27752-d9b8-4bd6-92c4-253508657db5-kube-api-access-kw7dt\") pod \"test-operator-controller-manager-56f8bfcd9f-rlwfv\" (UID: \"e0b27752-d9b8-4bd6-92c4-253508657db5\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.057394 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.057441 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.057535 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zgr\" (UniqueName: \"kubernetes.io/projected/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-kube-api-access-s7zgr\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.144925 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.158377 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zgr\" (UniqueName: \"kubernetes.io/projected/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-kube-api-access-s7zgr\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.158476 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.158508 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.158660 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.158731 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:07.658714062 +0000 UTC m=+878.241405476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.158757 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.158813 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:07.658797004 +0000 UTC m=+878.241488418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "metrics-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: W0203 06:15:07.195930 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfebe7a4c_e275_4af0_b895_8701c164271c.slice/crio-59ef0baae7945434a7f396db58104ccf485f71e39c4f016a51c26a227d5b799e WatchSource:0}: Error finding container 59ef0baae7945434a7f396db58104ccf485f71e39c4f016a51c26a227d5b799e: Status 404 returned error can't find the container with id 59ef0baae7945434a7f396db58104ccf485f71e39c4f016a51c26a227d5b799e Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.204489 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zgr\" (UniqueName: \"kubernetes.io/projected/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-kube-api-access-s7zgr\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.209918 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.210749 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.213754 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6xzj2" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.241847 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.259489 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.259752 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.260066 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert podName:08ed93ce-02ea-45af-b481-69ed92f5aff5 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:08.260048774 +0000 UTC m=+878.842740188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" (UID: "08ed93ce-02ea-45af-b481-69ed92f5aff5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.268623 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.272763 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.302374 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.302795 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.310407 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.362276 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7p8t\" (UniqueName: \"kubernetes.io/projected/71ed58d0-78f2-497b-8802-3647a361c99b-kube-api-access-v7p8t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mvdpp\" (UID: \"71ed58d0-78f2-497b-8802-3647a361c99b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.464278 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7p8t\" (UniqueName: \"kubernetes.io/projected/71ed58d0-78f2-497b-8802-3647a361c99b-kube-api-access-v7p8t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mvdpp\" (UID: \"71ed58d0-78f2-497b-8802-3647a361c99b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.491857 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7p8t\" (UniqueName: \"kubernetes.io/projected/71ed58d0-78f2-497b-8802-3647a361c99b-kube-api-access-v7p8t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mvdpp\" (UID: \"71ed58d0-78f2-497b-8802-3647a361c99b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.578785 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.592662 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-x2779"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.619885 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv"] Feb 03 06:15:07 crc kubenswrapper[4872]: W0203 06:15:07.621185 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e7b8a5_19cf_46ea_a135_019d30af35b3.slice/crio-3776f7ee9327da8fd5f9d32185a70f92b2cb0dd52d1f52f28c663837089f6a58 WatchSource:0}: Error finding container 3776f7ee9327da8fd5f9d32185a70f92b2cb0dd52d1f52f28c663837089f6a58: Status 404 returned error can't find the container with id 3776f7ee9327da8fd5f9d32185a70f92b2cb0dd52d1f52f28c663837089f6a58 Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.666619 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.666655 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.666677 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.666852 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.666897 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert podName:cd3e162d-6733-47c4-b507-c08c577723d0 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:09.666883191 +0000 UTC m=+880.249574605 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert") pod "infra-operator-controller-manager-79955696d6-9qph7" (UID: "cd3e162d-6733-47c4-b507-c08c577723d0") : secret "infra-operator-webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.667183 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.667205 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:08.667198608 +0000 UTC m=+879.249890022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "metrics-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.667237 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: E0203 06:15:07.667253 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:08.667248059 +0000 UTC m=+879.249939473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "webhook-server-cert" not found Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.796914 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.823169 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"] Feb 03 06:15:07 crc kubenswrapper[4872]: I0203 06:15:07.912890 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.003390 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.026533 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5"] Feb 03 06:15:08 crc kubenswrapper[4872]: W0203 06:15:08.036080 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71efcd75_c242_4036_b2e0_fdb117880dd9.slice/crio-bdd044357ece62e4674ddb70e41b5d870cca816a59d86e458e8f51c955c018f2 WatchSource:0}: Error finding container bdd044357ece62e4674ddb70e41b5d870cca816a59d86e458e8f51c955c018f2: Status 404 returned error can't find the container with id bdd044357ece62e4674ddb70e41b5d870cca816a59d86e458e8f51c955c018f2 Feb 03 06:15:08 crc 
kubenswrapper[4872]: I0203 06:15:08.087628 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" event={"ID":"8fc2acde-dcbe-4d32-ad0e-cd4627c2152b","Type":"ContainerStarted","Data":"f66ad3efa2441629a2345e134f5f6e494481c658b364628e607ab79ee755ccbc"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.088572 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" event={"ID":"cfe508f3-98be-48d5-bf5b-3cb24a9ba131","Type":"ContainerStarted","Data":"7af35bc03e3d9387c3637760c1bf08b99d702071ebecc4ee7b39b97f42329eb9"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.093081 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" event={"ID":"876b6e4d-32cd-47e3-b748-f9c8ea1d84cf","Type":"ContainerStarted","Data":"89a1ce5a9e3c05ab37458e9f8c912999293f5459ccdff0fae122a4d894d6b545"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.094953 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" event={"ID":"7102e0e7-3daa-4610-b931-ca17c7f08461","Type":"ContainerStarted","Data":"8556fa14c34b33d5263e30686a50c33fe83dfe2e21da5e536b11fd4435b56548"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.095748 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz" event={"ID":"c3aba523-0e11-4e5d-9adf-be5978a1f4e1","Type":"ContainerStarted","Data":"b54e8890f9783cab356e95fdc47574b7e5876a9dc94085ac323a0f3a1c7557df"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.104996 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" event={"ID":"71efcd75-c242-4036-b2e0-fdb117880dd9","Type":"ContainerStarted","Data":"bdd044357ece62e4674ddb70e41b5d870cca816a59d86e458e8f51c955c018f2"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.106938 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" event={"ID":"71308b40-7203-4586-9a21-9b4621a9aaf7","Type":"ContainerStarted","Data":"5fec82b7c9438ff9f45b85bf03e6c681d8445caa1881eac6b2d962800ce20c01"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.109603 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" event={"ID":"29e7b8a5-19cf-46ea-a135-019d30af35b3","Type":"ContainerStarted","Data":"3776f7ee9327da8fd5f9d32185a70f92b2cb0dd52d1f52f28c663837089f6a58"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.110714 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" event={"ID":"febe7a4c-e275-4af0-b895-8701c164271c","Type":"ContainerStarted","Data":"59ef0baae7945434a7f396db58104ccf485f71e39c4f016a51c26a227d5b799e"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.112254 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" event={"ID":"394038df-4d8a-41cc-bb90-02dec7dd1fb3","Type":"ContainerStarted","Data":"87b82049a6e2c93485e0374a6111ebe439be7d50b63f4ec1e0293c700798e309"} Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.194460 4872 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.205447 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.212932 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.212981 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.228408 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.243878 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.272665 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-t4l5f"] Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.275579 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.275753 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.275844 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert podName:08ed93ce-02ea-45af-b481-69ed92f5aff5 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:10.275830028 +0000 UTC m=+880.858521442 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" (UID: "08ed93ce-02ea-45af-b481-69ed92f5aff5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:08 crc kubenswrapper[4872]: W0203 06:15:08.288837 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod026cffca_2976_4ba1_8bb6_3e86c4521166.slice/crio-15eeb831f4443405bdb33f23697a907e33d4d660886f4588455fc33ae0b2bd88 WatchSource:0}: Error finding container 15eeb831f4443405bdb33f23697a907e33d4d660886f4588455fc33ae0b2bd88: Status 404 returned error can't find the container with id 15eeb831f4443405bdb33f23697a907e33d4d660886f4588455fc33ae0b2bd88 Feb 03 06:15:08 crc kubenswrapper[4872]: W0203 06:15:08.290118 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd843f756_0ec0_4a79_b34d_14e257e22102.slice/crio-a4a7c69acf99ed32dba1bfd4401433b1c9d0723dd82a1745f6406e08411f7601 WatchSource:0}: Error finding container a4a7c69acf99ed32dba1bfd4401433b1c9d0723dd82a1745f6406e08411f7601: Status 404 returned error can't find the container with id a4a7c69acf99ed32dba1bfd4401433b1c9d0723dd82a1745f6406e08411f7601 Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.290390 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft"] Feb 03 06:15:08 crc kubenswrapper[4872]: W0203 06:15:08.291180 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7319691f_007c_45cd_bd1b_11055339e2ab.slice/crio-4f53cb6ddddde26e1837e6bd628a176426bacb6ae78be7728f520c95a751d39c WatchSource:0}: Error finding container 4f53cb6ddddde26e1837e6bd628a176426bacb6ae78be7728f520c95a751d39c: Status 404 returned error can't find the container with id 4f53cb6ddddde26e1837e6bd628a176426bacb6ae78be7728f520c95a751d39c Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.293576 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkg98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-29rp5_openstack-operators(d843f756-0ec0-4a79-b34d-14e257e22102): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.295370 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" podUID="d843f756-0ec0-4a79-b34d-14e257e22102" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.296331 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-422nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-lnvft_openstack-operators(7319691f-007c-45cd-bd1b-11055339e2ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.297721 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" podUID="7319691f-007c-45cd-bd1b-11055339e2ab" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.300298 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45xnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-rjz4z_openstack-operators(026cffca-2976-4ba1-8bb6-3e86c4521166): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.301526 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" podUID="026cffca-2976-4ba1-8bb6-3e86c4521166" Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.353118 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv"] Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.368642 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kw7dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-rlwfv_openstack-operators(e0b27752-d9b8-4bd6-92c4-253508657db5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.370628 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" podUID="e0b27752-d9b8-4bd6-92c4-253508657db5" Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.416097 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp"] Feb 03 06:15:08 crc kubenswrapper[4872]: W0203 06:15:08.439758 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ed58d0_78f2_497b_8802_3647a361c99b.slice/crio-18303b997bf76dca04576183d67cf4567de62c38bef0432205fdfa9cec8b002b WatchSource:0}: Error finding container 18303b997bf76dca04576183d67cf4567de62c38bef0432205fdfa9cec8b002b: Status 404 returned error can't find the container with id 18303b997bf76dca04576183d67cf4567de62c38bef0432205fdfa9cec8b002b Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.681379 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:08 crc kubenswrapper[4872]: I0203 06:15:08.681422 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.681569 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.681615 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:10.681601578 +0000 UTC m=+881.264292982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "webhook-server-cert" not found Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.681933 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 06:15:08 crc kubenswrapper[4872]: E0203 06:15:08.681962 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:10.681955066 +0000 UTC m=+881.264646480 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "metrics-server-cert" not found Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.164460 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" event={"ID":"e7d3c449-bd59-48b2-9047-2c7589cdf51a","Type":"ContainerStarted","Data":"8bfdc6e2fecc84d41bfa36bc1a7f505c86802941008239256f52c61579bd1790"} Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.193054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" event={"ID":"e0b27752-d9b8-4bd6-92c4-253508657db5","Type":"ContainerStarted","Data":"b5d801f9ad83caffcb90113f2f222d9b88a343ef8265d27a25b6f7ea9a1de031"} Feb 03 06:15:09 crc kubenswrapper[4872]: E0203 06:15:09.207561 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" podUID="e0b27752-d9b8-4bd6-92c4-253508657db5" Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.216493 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" event={"ID":"9bb5cb68-4c55-4c47-beb8-a9caa56db1b3","Type":"ContainerStarted","Data":"fe1748d61defa56e24ed6adc0638f3296382c4fef78f03d1ac4f1dbd01088877"} Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.221754 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4" event={"ID":"584404c0-4ffd-43f5-a06f-009650dc0cc9","Type":"ContainerStarted","Data":"59e45dba2bedade1c28d4efa134cf27c27d59f195e60de0ad1722d66dbeef0f8"} Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.223010 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" event={"ID":"026cffca-2976-4ba1-8bb6-3e86c4521166","Type":"ContainerStarted","Data":"15eeb831f4443405bdb33f23697a907e33d4d660886f4588455fc33ae0b2bd88"} Feb 03 06:15:09 crc kubenswrapper[4872]: E0203 06:15:09.227723 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" podUID="026cffca-2976-4ba1-8bb6-3e86c4521166" Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.230837 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" event={"ID":"71ed58d0-78f2-497b-8802-3647a361c99b","Type":"ContainerStarted","Data":"18303b997bf76dca04576183d67cf4567de62c38bef0432205fdfa9cec8b002b"} Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.240879 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" event={"ID":"d843f756-0ec0-4a79-b34d-14e257e22102","Type":"ContainerStarted","Data":"a4a7c69acf99ed32dba1bfd4401433b1c9d0723dd82a1745f6406e08411f7601"} Feb 03 06:15:09 crc kubenswrapper[4872]: E0203 06:15:09.242225 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" podUID="d843f756-0ec0-4a79-b34d-14e257e22102" Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.243310 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" event={"ID":"d9a67e95-335b-40cf-af71-4f3fd69a1fd9","Type":"ContainerStarted","Data":"5a86152bf96b1059a573ed263da49940d9d4ca3a1484887fd57fa565d304f829"} Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.260920 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" event={"ID":"39f4fe96-ca54-4135-9eb3-e40a187e54a4","Type":"ContainerStarted","Data":"d2b8b4cab80ddc1ba22d177155c4657563090909d548c074b7c4818ab5ca62e5"} Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.275755 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" event={"ID":"7319691f-007c-45cd-bd1b-11055339e2ab","Type":"ContainerStarted","Data":"4f53cb6ddddde26e1837e6bd628a176426bacb6ae78be7728f520c95a751d39c"} Feb 03 06:15:09 crc kubenswrapper[4872]: E0203 06:15:09.291225 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" podUID="7319691f-007c-45cd-bd1b-11055339e2ab" Feb 03 06:15:09 crc kubenswrapper[4872]: I0203 06:15:09.722952 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:09 crc kubenswrapper[4872]: E0203 06:15:09.723126 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 03 06:15:09 crc kubenswrapper[4872]: E0203 06:15:09.723216 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert podName:cd3e162d-6733-47c4-b507-c08c577723d0 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:13.723196229 +0000 UTC m=+884.305887643 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert") pod "infra-operator-controller-manager-79955696d6-9qph7" (UID: "cd3e162d-6733-47c4-b507-c08c577723d0") : secret "infra-operator-webhook-server-cert" not found Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.291196 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" podUID="026cffca-2976-4ba1-8bb6-3e86c4521166" Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.291530 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" podUID="7319691f-007c-45cd-bd1b-11055339e2ab" Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.291568 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" podUID="d843f756-0ec0-4a79-b34d-14e257e22102" Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.291607 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" podUID="e0b27752-d9b8-4bd6-92c4-253508657db5" Feb 03 06:15:10 crc kubenswrapper[4872]: I0203 06:15:10.335954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.336494 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.336553 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert podName:08ed93ce-02ea-45af-b481-69ed92f5aff5 nodeName:}" failed. 
No retries permitted until 2026-02-03 06:15:14.336537331 +0000 UTC m=+884.919228745 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" (UID: "08ed93ce-02ea-45af-b481-69ed92f5aff5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:10 crc kubenswrapper[4872]: I0203 06:15:10.742535 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.742749 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 06:15:10 crc kubenswrapper[4872]: I0203 06:15:10.742779 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.742897 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.742924 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:14.742837564 +0000 UTC m=+885.325528978 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "metrics-server-cert" not found Feb 03 06:15:10 crc kubenswrapper[4872]: E0203 06:15:10.742964 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:14.742948587 +0000 UTC m=+885.325640001 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "webhook-server-cert" not found Feb 03 06:15:13 crc kubenswrapper[4872]: I0203 06:15:13.804039 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:13 crc kubenswrapper[4872]: E0203 06:15:13.804719 4872 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 06:15:13 crc kubenswrapper[4872]: E0203 06:15:13.804773 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert podName:cd3e162d-6733-47c4-b507-c08c577723d0 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:21.804758474 +0000 UTC m=+892.387449888 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert") pod "infra-operator-controller-manager-79955696d6-9qph7" (UID: "cd3e162d-6733-47c4-b507-c08c577723d0") : secret "infra-operator-webhook-server-cert" not found Feb 03 06:15:14 crc kubenswrapper[4872]: I0203 06:15:14.428699 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:14 crc kubenswrapper[4872]: E0203 06:15:14.428909 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:14 crc kubenswrapper[4872]: E0203 06:15:14.428954 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert podName:08ed93ce-02ea-45af-b481-69ed92f5aff5 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:22.428941966 +0000 UTC m=+893.011633380 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" (UID: "08ed93ce-02ea-45af-b481-69ed92f5aff5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:14 crc kubenswrapper[4872]: I0203 06:15:14.833142 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:14 crc kubenswrapper[4872]: I0203 06:15:14.833189 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:14 crc kubenswrapper[4872]: E0203 06:15:14.833357 4872 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 06:15:14 crc kubenswrapper[4872]: E0203 06:15:14.833355 4872 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 06:15:14 crc kubenswrapper[4872]: E0203 06:15:14.833399 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:22.833385973 +0000 UTC m=+893.416077387 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "webhook-server-cert" not found Feb 03 06:15:14 crc kubenswrapper[4872]: E0203 06:15:14.833466 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs podName:9ee72576-2dc3-4b0b-ba3d-38aa27fba615 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:22.833438046 +0000 UTC m=+893.416129490 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs") pod "openstack-operator-controller-manager-5bc755b6c5-ptv7t" (UID: "9ee72576-2dc3-4b0b-ba3d-38aa27fba615") : secret "metrics-server-cert" not found Feb 03 06:15:21 crc kubenswrapper[4872]: I0203 06:15:21.850150 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:21 crc kubenswrapper[4872]: I0203 06:15:21.865229 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd3e162d-6733-47c4-b507-c08c577723d0-cert\") pod \"infra-operator-controller-manager-79955696d6-9qph7\" (UID: \"cd3e162d-6733-47c4-b507-c08c577723d0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:22 crc kubenswrapper[4872]: I0203 06:15:22.100381 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:22 crc kubenswrapper[4872]: I0203 06:15:22.468012 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:22 crc kubenswrapper[4872]: E0203 06:15:22.468163 4872 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:22 crc kubenswrapper[4872]: E0203 06:15:22.468217 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert podName:08ed93ce-02ea-45af-b481-69ed92f5aff5 nodeName:}" failed. No retries permitted until 2026-02-03 06:15:38.468202613 +0000 UTC m=+909.050894017 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" (UID: "08ed93ce-02ea-45af-b481-69ed92f5aff5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 06:15:22 crc kubenswrapper[4872]: E0203 06:15:22.544918 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Feb 03 06:15:22 crc kubenswrapper[4872]: E0203 06:15:22.545077 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59mp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-w7ncd_openstack-operators(d9a67e95-335b-40cf-af71-4f3fd69a1fd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:22 crc kubenswrapper[4872]: E0203 06:15:22.546255 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" podUID="d9a67e95-335b-40cf-af71-4f3fd69a1fd9" Feb 03 06:15:22 crc kubenswrapper[4872]: I0203 06:15:22.873373 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:22 crc kubenswrapper[4872]: I0203 06:15:22.873436 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:22 crc kubenswrapper[4872]: I0203 06:15:22.878938 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-webhook-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:22 crc kubenswrapper[4872]: I0203 06:15:22.894319 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ee72576-2dc3-4b0b-ba3d-38aa27fba615-metrics-certs\") pod \"openstack-operator-controller-manager-5bc755b6c5-ptv7t\" (UID: \"9ee72576-2dc3-4b0b-ba3d-38aa27fba615\") " pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:23 crc kubenswrapper[4872]: I0203 06:15:23.064073 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" Feb 03 06:15:23 crc kubenswrapper[4872]: E0203 06:15:23.402611 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" podUID="d9a67e95-335b-40cf-af71-4f3fd69a1fd9" Feb 03 06:15:23 crc kubenswrapper[4872]: E0203 06:15:23.705311 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Feb 03 06:15:23 crc kubenswrapper[4872]: E0203 06:15:23.705727 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68bpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-dpd6g_openstack-operators(394038df-4d8a-41cc-bb90-02dec7dd1fb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:23 crc kubenswrapper[4872]: E0203 06:15:23.707181 4872 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" podUID="394038df-4d8a-41cc-bb90-02dec7dd1fb3" Feb 03 06:15:24 crc kubenswrapper[4872]: E0203 06:15:24.166744 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Feb 03 06:15:24 crc kubenswrapper[4872]: E0203 06:15:24.166947 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pngnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-n2mcv_openstack-operators(8fc2acde-dcbe-4d32-ad0e-cd4627c2152b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:24 crc kubenswrapper[4872]: E0203 06:15:24.168791 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" podUID="8fc2acde-dcbe-4d32-ad0e-cd4627c2152b" Feb 03 06:15:24 crc kubenswrapper[4872]: E0203 06:15:24.407935 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" podUID="394038df-4d8a-41cc-bb90-02dec7dd1fb3" Feb 03 06:15:24 crc kubenswrapper[4872]: E0203 06:15:24.408820 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" podUID="8fc2acde-dcbe-4d32-ad0e-cd4627c2152b" Feb 03 06:15:25 crc kubenswrapper[4872]: E0203 06:15:25.650542 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Feb 03 06:15:25 crc kubenswrapper[4872]: E0203 06:15:25.650835 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4df2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-czpn5_openstack-operators(71efcd75-c242-4036-b2e0-fdb117880dd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:25 crc kubenswrapper[4872]: E0203 06:15:25.652074 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" podUID="71efcd75-c242-4036-b2e0-fdb117880dd9" Feb 03 06:15:26 crc kubenswrapper[4872]: E0203 06:15:26.423060 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" podUID="71efcd75-c242-4036-b2e0-fdb117880dd9" Feb 03 06:15:26 crc kubenswrapper[4872]: E0203 06:15:26.867315 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Feb 03 06:15:26 crc kubenswrapper[4872]: E0203 06:15:26.867623 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hw24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-5chl5_openstack-operators(7102e0e7-3daa-4610-b931-ca17c7f08461): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:26 crc kubenswrapper[4872]: E0203 06:15:26.869120 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" podUID="7102e0e7-3daa-4610-b931-ca17c7f08461" Feb 03 06:15:27 crc kubenswrapper[4872]: E0203 06:15:27.426233 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" podUID="7102e0e7-3daa-4610-b931-ca17c7f08461" Feb 03 06:15:28 crc kubenswrapper[4872]: E0203 06:15:28.733176 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Feb 03 06:15:28 crc kubenswrapper[4872]: E0203 06:15:28.733410 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fnt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-dvqpz_openstack-operators(c3aba523-0e11-4e5d-9adf-be5978a1f4e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:28 crc kubenswrapper[4872]: E0203 06:15:28.735503 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz" podUID="c3aba523-0e11-4e5d-9adf-be5978a1f4e1" Feb 03 06:15:29 crc kubenswrapper[4872]: E0203 06:15:29.327416 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Feb 03 06:15:29 crc kubenswrapper[4872]: E0203 06:15:29.327987 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96fkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-p475q_openstack-operators(e7d3c449-bd59-48b2-9047-2c7589cdf51a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 06:15:29 crc kubenswrapper[4872]: E0203 06:15:29.331001 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" podUID="e7d3c449-bd59-48b2-9047-2c7589cdf51a"
Feb 03 06:15:29 crc kubenswrapper[4872]: E0203 06:15:29.441323 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz" podUID="c3aba523-0e11-4e5d-9adf-be5978a1f4e1"
Feb 03 06:15:29 crc kubenswrapper[4872]: E0203 06:15:29.442006 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" podUID="e7d3c449-bd59-48b2-9047-2c7589cdf51a"
Feb 03 06:15:30 crc kubenswrapper[4872]: E0203 06:15:30.936449 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b"
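By this point the same signature has repeated for swift, heat, horizon, neutron, designate, manila, octavia and now watcher: identical manager containers (same Command, Args and Env), differing only in image digest and service-account token mount, all failing with the same canceled pull. That points at one shared cause, registry reachability or a canceled pull context, rather than per-operator misconfiguration. When triaging a capture like this, the failing digests can be pulled out mechanically; a small sketch, where the regular expression is derived from the line shape above, not from any kubelet format guarantee:

    // Sketch: extract the failing image references from journal text shaped
    // like the "PullImage from image service failed" entries above.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`"PullImage from image service failed".*?image="([^"]+)"`)
        line := `E0203 06:15:28.733176 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566"`
        for _, m := range re.FindAllStringSubmatch(line, -1) {
            fmt.Println(m[1]) // the quay.io/...@sha256:... reference
        }
    }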
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hj47j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-t4l5f_openstack-operators(39f4fe96-ca54-4135-9eb3-e40a187e54a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:30 crc kubenswrapper[4872]: E0203 06:15:30.938323 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" podUID="39f4fe96-ca54-4135-9eb3-e40a187e54a4" Feb 03 06:15:31 crc kubenswrapper[4872]: I0203 06:15:31.271542 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:15:31 crc kubenswrapper[4872]: I0203 06:15:31.271611 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.401170 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.401309 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6t5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-ww2cx_openstack-operators(cfe508f3-98be-48d5-bf5b-3cb24a9ba131): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.402957 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" podUID="cfe508f3-98be-48d5-bf5b-3cb24a9ba131" Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.453991 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" podUID="cfe508f3-98be-48d5-bf5b-3cb24a9ba131" Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.454025 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" podUID="39f4fe96-ca54-4135-9eb3-e40a187e54a4" Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.973220 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.974438 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pplzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-84f48565d4-9x5w8_openstack-operators(71308b40-7203-4586-9a21-9b4621a9aaf7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 06:15:31 crc kubenswrapper[4872]: E0203 06:15:31.976471 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" podUID="71308b40-7203-4586-9a21-9b4621a9aaf7"
Feb 03 06:15:32 crc kubenswrapper[4872]: E0203 06:15:32.408486 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Feb 03 06:15:32 crc kubenswrapper[4872]: E0203 06:15:32.408661 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v7p8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mvdpp_openstack-operators(71ed58d0-78f2-497b-8802-3647a361c99b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 06:15:32 crc kubenswrapper[4872]: E0203 06:15:32.410124 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" podUID="71ed58d0-78f2-497b-8802-3647a361c99b"
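The rabbitmq-cluster-operator entry is the one spec in this burst that differs structurally: the container is named operator rather than manager, it exposes a metrics port (9782/TCP), runs with Drop:[ALL] plus RunAsNonRoot, carries no probes, and fills OPERATOR_NAMESPACE from the downward API. For reference, that env var as it would be written with the Kubernetes Go types; this is a reconstruction from the dump above, not the operator's actual source:

    // Reconstruction (sketch) of the downward-API env var visible in the
    // rabbitmq-cluster-operator container dump above.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := corev1.EnvVar{
            Name: "OPERATOR_NAMESPACE",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{
                    APIVersion: "v1",
                    FieldPath:  "metadata.namespace", // resolved per pod at start
                },
            },
        }
        fmt.Printf("%+v\n", env)
    }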
Feb 03 06:15:32 crc kubenswrapper[4872]: E0203 06:15:32.459823 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" podUID="71308b40-7203-4586-9a21-9b4621a9aaf7"
Feb 03 06:15:32 crc kubenswrapper[4872]: E0203 06:15:32.461799 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" podUID="71ed58d0-78f2-497b-8802-3647a361c99b"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.207579 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t"]
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.259580 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-9qph7"]
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.489036 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" event={"ID":"026cffca-2976-4ba1-8bb6-3e86c4521166","Type":"ContainerStarted","Data":"24791b59f3a4723b3922a64897a44e7cc2dfcde1048713c95cdb2fa01704c993"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.489187 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.490991 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" event={"ID":"febe7a4c-e275-4af0-b895-8701c164271c","Type":"ContainerStarted","Data":"e0fbff1c5dc714ce033fa4eb3b1ae07ecc6348aa76f3cfe00a451bcd3ec8f46d"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.491090 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.492801 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" event={"ID":"9ee72576-2dc3-4b0b-ba3d-38aa27fba615","Type":"ContainerStarted","Data":"398cda12f931a0bd73f69990136eb54bb4615550b95ebca4defde12da7e00448"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.492825 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" event={"ID":"9ee72576-2dc3-4b0b-ba3d-38aa27fba615","Type":"ContainerStarted","Data":"6d73dc9e259a969f033a5031c40acd84ad2622284fef1a485bb0bc8f491a52c9"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.493139 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.497334 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" event={"ID":"cd3e162d-6733-47c4-b507-c08c577723d0","Type":"ContainerStarted","Data":"ddcfde6d3ddf0ae092226775e8b14b0b61f579c083ffdf214af2ac2602182eec"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.498632 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" event={"ID":"7319691f-007c-45cd-bd1b-11055339e2ab","Type":"ContainerStarted","Data":"27873a372ea2b9d1f290a2009c5c7416792de8ed1c903439f77b7d2930c90a84"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.499212 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.500537 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4" event={"ID":"584404c0-4ffd-43f5-a06f-009650dc0cc9","Type":"ContainerStarted","Data":"c8dc2e63e54029e4ad8dbcb5a21dd99101b76bc464af9713a3fbe5972ddd1e28"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.501021 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.502449 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" event={"ID":"876b6e4d-32cd-47e3-b748-f9c8ea1d84cf","Type":"ContainerStarted","Data":"148230b810dd1057a8edb3aed79335dd86be38e40288b67c7a8b9ecf8115364e"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.502786 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.504911 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" event={"ID":"29e7b8a5-19cf-46ea-a135-019d30af35b3","Type":"ContainerStarted","Data":"25b9732d796612adaa1308e6ec14e501a7921a822849cc85879d8d47f45d5a4a"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.505298 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.508725 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" event={"ID":"d843f756-0ec0-4a79-b34d-14e257e22102","Type":"ContainerStarted","Data":"2f18c8136242a68dc9a6f27ee40fdab47db36f6f79a500182adb544655f75ad9"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.509262 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5"
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.511861 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" event={"ID":"e0b27752-d9b8-4bd6-92c4-253508657db5","Type":"ContainerStarted","Data":"493180396cdc32bd0279d767585d000f653cb5c681049e53c4af574cefbdd5a0"}
Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.512176 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv"
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.513584 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" event={"ID":"9bb5cb68-4c55-4c47-beb8-a9caa56db1b3","Type":"ContainerStarted","Data":"0405f0444d90793834d3926aac9091454d2f687da0fc51e0a06e747ba091d908"} Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.513989 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.569567 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" podStartSLOduration=4.080467514 podStartE2EDuration="30.569552388s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.300148819 +0000 UTC m=+878.882840233" lastFinishedPulling="2026-02-03 06:15:34.789233693 +0000 UTC m=+905.371925107" observedRunningTime="2026-02-03 06:15:35.52533876 +0000 UTC m=+906.108030174" watchObservedRunningTime="2026-02-03 06:15:35.569552388 +0000 UTC m=+906.152243802" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.570071 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4" podStartSLOduration=5.4353623110000004 podStartE2EDuration="29.57006702s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.277092478 +0000 UTC m=+878.859783882" lastFinishedPulling="2026-02-03 06:15:32.411797177 +0000 UTC m=+902.994488591" observedRunningTime="2026-02-03 06:15:35.568939753 +0000 UTC m=+906.151631167" watchObservedRunningTime="2026-02-03 06:15:35.57006702 +0000 UTC m=+906.152758434" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.618799 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" podStartSLOduration=6.294559295 podStartE2EDuration="30.618780954s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.625642415 +0000 UTC m=+878.208333829" lastFinishedPulling="2026-02-03 06:15:31.949864084 +0000 UTC m=+902.532555488" observedRunningTime="2026-02-03 06:15:35.616865488 +0000 UTC m=+906.199556902" watchObservedRunningTime="2026-02-03 06:15:35.618780954 +0000 UTC m=+906.201472368" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.731057 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" podStartSLOduration=5.603692475 podStartE2EDuration="29.731042278s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.26543352 +0000 UTC m=+878.848124934" lastFinishedPulling="2026-02-03 06:15:32.392783323 +0000 UTC m=+902.975474737" observedRunningTime="2026-02-03 06:15:35.72071393 +0000 UTC m=+906.303405344" watchObservedRunningTime="2026-02-03 06:15:35.731042278 +0000 UTC m=+906.313733692" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.732202 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" podStartSLOduration=4.243902872 podStartE2EDuration="30.732195616s" 
podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.295202851 +0000 UTC m=+878.877894265" lastFinishedPulling="2026-02-03 06:15:34.783495595 +0000 UTC m=+905.366187009" observedRunningTime="2026-02-03 06:15:35.657359976 +0000 UTC m=+906.240051390" watchObservedRunningTime="2026-02-03 06:15:35.732195616 +0000 UTC m=+906.314887030" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.774783 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" podStartSLOduration=4.389065912 podStartE2EDuration="30.774769263s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.207636322 +0000 UTC m=+877.790327736" lastFinishedPulling="2026-02-03 06:15:33.593339673 +0000 UTC m=+904.176031087" observedRunningTime="2026-02-03 06:15:35.77215715 +0000 UTC m=+906.354848564" watchObservedRunningTime="2026-02-03 06:15:35.774769263 +0000 UTC m=+906.357460677" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.854363 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t" podStartSLOduration=29.854348986 podStartE2EDuration="29.854348986s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:15:35.840053104 +0000 UTC m=+906.422744518" watchObservedRunningTime="2026-02-03 06:15:35.854348986 +0000 UTC m=+906.437040400" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.931933 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" podStartSLOduration=3.4361608869999998 podStartE2EDuration="29.93191421s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.293446199 +0000 UTC m=+878.876137613" lastFinishedPulling="2026-02-03 06:15:34.789199522 +0000 UTC m=+905.371890936" observedRunningTime="2026-02-03 06:15:35.916992474 +0000 UTC m=+906.499683908" watchObservedRunningTime="2026-02-03 06:15:35.93191421 +0000 UTC m=+906.514605624" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.932372 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" podStartSLOduration=3.517485271 podStartE2EDuration="29.932365621s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.368524713 +0000 UTC m=+878.951216127" lastFinishedPulling="2026-02-03 06:15:34.783405063 +0000 UTC m=+905.366096477" observedRunningTime="2026-02-03 06:15:35.875617214 +0000 UTC m=+906.458308628" watchObservedRunningTime="2026-02-03 06:15:35.932365621 +0000 UTC m=+906.515057035" Feb 03 06:15:35 crc kubenswrapper[4872]: I0203 06:15:35.970894 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" podStartSLOduration=5.872704999 podStartE2EDuration="30.970879751s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.294639282 +0000 UTC m=+877.877330696" lastFinishedPulling="2026-02-03 06:15:32.392814034 +0000 UTC m=+902.975505448" observedRunningTime="2026-02-03 06:15:35.965288408 +0000 UTC m=+906.547979822" 
watchObservedRunningTime="2026-02-03 06:15:35.970879751 +0000 UTC m=+906.553571155" Feb 03 06:15:36 crc kubenswrapper[4872]: I0203 06:15:36.531905 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" event={"ID":"8fc2acde-dcbe-4d32-ad0e-cd4627c2152b","Type":"ContainerStarted","Data":"fe3e72916b79ef5fbf4d9ddfce3a7dca13d3a7faab912933e38dbdce9082a956"} Feb 03 06:15:36 crc kubenswrapper[4872]: I0203 06:15:36.535563 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" Feb 03 06:15:36 crc kubenswrapper[4872]: I0203 06:15:36.552982 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" podStartSLOduration=3.523624662 podStartE2EDuration="31.552966097s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.645644133 +0000 UTC m=+878.228335547" lastFinishedPulling="2026-02-03 06:15:35.674985568 +0000 UTC m=+906.257676982" observedRunningTime="2026-02-03 06:15:36.549770081 +0000 UTC m=+907.132461495" watchObservedRunningTime="2026-02-03 06:15:36.552966097 +0000 UTC m=+907.135657511" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.298107 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mrbg7"] Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.299730 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.322027 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrbg7"] Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.499598 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-catalog-content\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.499824 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-utilities\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.499963 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7g27\" (UniqueName: \"kubernetes.io/projected/0614c415-b72c-4144-a715-033262112981-kube-api-access-p7g27\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.600790 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7g27\" (UniqueName: \"kubernetes.io/projected/0614c415-b72c-4144-a715-033262112981-kube-api-access-p7g27\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.601432 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-catalog-content\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.601906 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-catalog-content\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.602190 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-utilities\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.602572 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-utilities\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.620322 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7g27\" (UniqueName: \"kubernetes.io/projected/0614c415-b72c-4144-a715-033262112981-kube-api-access-p7g27\") pod \"redhat-operators-mrbg7\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:37 crc kubenswrapper[4872]: I0203 06:15:37.914094 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:38 crc kubenswrapper[4872]: I0203 06:15:38.515908 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:38 crc kubenswrapper[4872]: I0203 06:15:38.519201 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08ed93ce-02ea-45af-b481-69ed92f5aff5-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4\" (UID: \"08ed93ce-02ea-45af-b481-69ed92f5aff5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:38 crc kubenswrapper[4872]: I0203 06:15:38.726539 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ftwqn" Feb 03 06:15:38 crc kubenswrapper[4872]: I0203 06:15:38.735578 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.118929 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrbg7"] Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.298983 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4"] Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.566199 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" event={"ID":"71efcd75-c242-4036-b2e0-fdb117880dd9","Type":"ContainerStarted","Data":"388510edcc0d4a4c591203b0ba68b06679b9d6e42798f8d9c5c06bd67402c800"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.567171 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.569013 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" event={"ID":"08ed93ce-02ea-45af-b481-69ed92f5aff5","Type":"ContainerStarted","Data":"8c03e924d5e0109abbbe5ac85753c4b2df9b9d2b3db747a53652065cbdd97092"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.573041 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" event={"ID":"e7d3c449-bd59-48b2-9047-2c7589cdf51a","Type":"ContainerStarted","Data":"78d4d789f203f6dd78c2dfe6da1d927797925871aebceca4ec0959035a80ea86"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.574085 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.577011 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" event={"ID":"cd3e162d-6733-47c4-b507-c08c577723d0","Type":"ContainerStarted","Data":"187b6daf97efe1f1c64023b9b0d7c9c706953ac60905f31a79510feee7182818"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.577666 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.579679 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" event={"ID":"d9a67e95-335b-40cf-af71-4f3fd69a1fd9","Type":"ContainerStarted","Data":"16967a5c46bd95af95ee1bbe6b4bf38dd4ec80e0570d406e94344ab931002592"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.580174 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.585101 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" event={"ID":"394038df-4d8a-41cc-bb90-02dec7dd1fb3","Type":"ContainerStarted","Data":"820b6747eb2e49fafadf7dba6e02fbc7ec5deb4c4f0ff72ce3439c50b9bfa8a9"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.585522 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.585749 4872 generic.go:334] "Generic (PLEG): container finished" podID="0614c415-b72c-4144-a715-033262112981" containerID="6a1d9da37e7dca0740ebc3cbcb8d17b5fd559a950adf91849ebe5c791de67dce" exitCode=0 Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.585791 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbg7" event={"ID":"0614c415-b72c-4144-a715-033262112981","Type":"ContainerDied","Data":"6a1d9da37e7dca0740ebc3cbcb8d17b5fd559a950adf91849ebe5c791de67dce"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.585816 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbg7" event={"ID":"0614c415-b72c-4144-a715-033262112981","Type":"ContainerStarted","Data":"d62b5510a34e8f36d06cec7d93923c13541f69891b2f5897cc801cff5db750e1"} Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.602554 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" podStartSLOduration=3.357937839 podStartE2EDuration="36.602538813s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.038646838 +0000 UTC m=+878.621338243" lastFinishedPulling="2026-02-03 06:15:41.283247773 +0000 UTC m=+911.865939217" observedRunningTime="2026-02-03 06:15:41.598339512 +0000 UTC m=+912.181030936" watchObservedRunningTime="2026-02-03 06:15:41.602538813 +0000 UTC m=+912.185230227" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.617967 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" podStartSLOduration=3.182396052 podStartE2EDuration="36.617949794s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.798523338 +0000 UTC m=+878.381214752" lastFinishedPulling="2026-02-03 06:15:41.23407708 +0000 UTC m=+911.816768494" observedRunningTime="2026-02-03 06:15:41.61736789 +0000 UTC m=+912.200059304" watchObservedRunningTime="2026-02-03 06:15:41.617949794 +0000 UTC m=+912.200641208" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.635781 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" podStartSLOduration=30.638127831 podStartE2EDuration="36.635765133s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:35.267697001 +0000 UTC m=+905.850388415" lastFinishedPulling="2026-02-03 06:15:41.265334303 +0000 UTC m=+911.848025717" observedRunningTime="2026-02-03 06:15:41.631789507 +0000 UTC m=+912.214480921" watchObservedRunningTime="2026-02-03 06:15:41.635765133 +0000 UTC m=+912.218456547" Feb 03 06:15:41 crc kubenswrapper[4872]: I0203 06:15:41.653588 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" podStartSLOduration=2.655573605 podStartE2EDuration="35.653572291s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.265480121 +0000 UTC m=+878.848171535" lastFinishedPulling="2026-02-03 06:15:41.263478807 +0000 UTC m=+911.846170221" observedRunningTime="2026-02-03 06:15:41.648587451 +0000 UTC m=+912.231278865" watchObservedRunningTime="2026-02-03 
Feb 03 06:15:42 crc kubenswrapper[4872]: I0203 06:15:42.593880 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbg7" event={"ID":"0614c415-b72c-4144-a715-033262112981","Type":"ContainerStarted","Data":"cae0174f81f4d4bed42c1f84c8a1f977e5a02581064153c8d787670d445aaea4"}
Feb 03 06:15:43 crc kubenswrapper[4872]: I0203 06:15:43.082322 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5bc755b6c5-ptv7t"
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.607433 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz" event={"ID":"c3aba523-0e11-4e5d-9adf-be5978a1f4e1","Type":"ContainerStarted","Data":"a7bc2a1dbac6c11b8f94c6577964d4ea4a153904a6fa573e2279f8a1d906c21c"}
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.607920 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz"
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.610785 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" event={"ID":"7102e0e7-3daa-4610-b931-ca17c7f08461","Type":"ContainerStarted","Data":"5d40022546def0bb8aeee3492d148ee453b9b5bfbefc7de840d15882d8419934"}
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.610972 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5"
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.613499 4872 generic.go:334] "Generic (PLEG): container finished" podID="0614c415-b72c-4144-a715-033262112981" containerID="cae0174f81f4d4bed42c1f84c8a1f977e5a02581064153c8d787670d445aaea4" exitCode=0
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.613556 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbg7" event={"ID":"0614c415-b72c-4144-a715-033262112981","Type":"ContainerDied","Data":"cae0174f81f4d4bed42c1f84c8a1f977e5a02581064153c8d787670d445aaea4"}
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.615657 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" event={"ID":"08ed93ce-02ea-45af-b481-69ed92f5aff5","Type":"ContainerStarted","Data":"d091d34ec75c5153325b69cae763ffeaece0f86e4a56195ca13138eb1f3db504"}
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.618553 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" event={"ID":"71ed58d0-78f2-497b-8802-3647a361c99b","Type":"ContainerStarted","Data":"e88272d00e6dd14b291d781cd8ea99b8902f5d73427b4a4dfddef45e9d2f7efb"}
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.635486 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz" podStartSLOduration=3.303766382 podStartE2EDuration="39.63547058s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.863226074 +0000 UTC m=+878.445917488" lastFinishedPulling="2026-02-03 06:15:44.194930272 +0000 UTC m=+914.777621686" observedRunningTime="2026-02-03 06:15:44.629541996 +0000 UTC m=+915.212233410" watchObservedRunningTime="2026-02-03 06:15:44.63547058 +0000 UTC m=+915.218161994"
Feb 03 06:15:44 crc kubenswrapper[4872]: I0203 06:15:44.682199 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" podStartSLOduration=3.924161419 podStartE2EDuration="39.682183863s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.294328994 +0000 UTC m=+877.877020408" lastFinishedPulling="2026-02-03 06:15:43.052351438 +0000 UTC m=+913.635042852" observedRunningTime="2026-02-03 06:15:44.679110629 +0000 UTC m=+915.261802053" watchObservedRunningTime="2026-02-03 06:15:44.682183863 +0000 UTC m=+915.264875277"
Feb 03 06:15:45 crc kubenswrapper[4872]: I0203 06:15:45.628794 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbg7" event={"ID":"0614c415-b72c-4144-a715-033262112981","Type":"ContainerStarted","Data":"d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3"}
Feb 03 06:15:45 crc kubenswrapper[4872]: I0203 06:15:45.629190 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4"
Feb 03 06:15:45 crc kubenswrapper[4872]: I0203 06:15:45.649026 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mvdpp" podStartSLOduration=2.898270519 podStartE2EDuration="38.648999899s" podCreationTimestamp="2026-02-03 06:15:07 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.445208327 +0000 UTC m=+879.027899741" lastFinishedPulling="2026-02-03 06:15:44.195937717 +0000 UTC m=+914.778629121" observedRunningTime="2026-02-03 06:15:45.643135298 +0000 UTC m=+916.225826732" watchObservedRunningTime="2026-02-03 06:15:45.648999899 +0000 UTC m=+916.231691343"
Feb 03 06:15:45 crc kubenswrapper[4872]: I0203 06:15:45.680251 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" podStartSLOduration=36.82624806 podStartE2EDuration="39.680235111s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="2026-02-03 06:15:41.340188713 +0000 UTC m=+911.922880127" lastFinishedPulling="2026-02-03 06:15:44.194175764 +0000 UTC m=+914.776867178" observedRunningTime="2026-02-03 06:15:45.674886332 +0000 UTC m=+916.257577756" watchObservedRunningTime="2026-02-03 06:15:45.680235111 +0000 UTC m=+916.262926535"
Feb 03 06:15:45 crc kubenswrapper[4872]: I0203 06:15:45.716619 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mrbg7" podStartSLOduration=5.056632787 podStartE2EDuration="8.716603066s" podCreationTimestamp="2026-02-03 06:15:37 +0000 UTC" firstStartedPulling="2026-02-03 06:15:41.587250375 +0000 UTC m=+912.169941789" lastFinishedPulling="2026-02-03 06:15:45.247220634 +0000 UTC m=+915.829912068" observedRunningTime="2026-02-03 06:15:45.715164491 +0000 UTC m=+916.297855945" watchObservedRunningTime="2026-02-03 06:15:45.716603066 +0000 UTC m=+916.299294480"
podStartE2EDuration="8.716603066s" podCreationTimestamp="2026-02-03 06:15:37 +0000 UTC" firstStartedPulling="2026-02-03 06:15:41.587250375 +0000 UTC m=+912.169941789" lastFinishedPulling="2026-02-03 06:15:45.247220634 +0000 UTC m=+915.829912068" observedRunningTime="2026-02-03 06:15:45.715164491 +0000 UTC m=+916.297855945" watchObservedRunningTime="2026-02-03 06:15:45.716603066 +0000 UTC m=+916.299294480" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.001974 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-bw7h4" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.010516 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-zznkj" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.093201 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-x2779" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.225028 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-n2mcv" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.448498 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-dpd6g" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.543480 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-lnvft" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.635583 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" event={"ID":"39f4fe96-ca54-4135-9eb3-e40a187e54a4","Type":"ContainerStarted","Data":"fb88324ecb6e50881ba1077f855cd7dd4843beb63f6afca4062fdc4107cbcf84"} Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.636081 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.652537 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-czpn5" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.676082 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" podStartSLOduration=3.262182483 podStartE2EDuration="40.676062815s" podCreationTimestamp="2026-02-03 06:15:06 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.277881176 +0000 UTC m=+878.860572580" lastFinishedPulling="2026-02-03 06:15:45.691761488 +0000 UTC m=+916.274452912" observedRunningTime="2026-02-03 06:15:46.650340077 +0000 UTC m=+917.233031491" watchObservedRunningTime="2026-02-03 06:15:46.676062815 +0000 UTC m=+917.258754229" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.713256 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-rjz4z" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.766505 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-p475q" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.875986 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lkxq4" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.945301 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w7ncd" Feb 03 06:15:46 crc kubenswrapper[4872]: I0203 06:15:46.952631 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-vl4tx" Feb 03 06:15:47 crc kubenswrapper[4872]: I0203 06:15:47.273261 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-29rp5" Feb 03 06:15:47 crc kubenswrapper[4872]: I0203 06:15:47.309388 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rlwfv" Feb 03 06:15:47 crc kubenswrapper[4872]: I0203 06:15:47.647766 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" event={"ID":"71308b40-7203-4586-9a21-9b4621a9aaf7","Type":"ContainerStarted","Data":"21b37cd58531415dc13704fd84998d6e05d63f0ddcf26e4a491afd6c480eb13e"} Feb 03 06:15:47 crc kubenswrapper[4872]: I0203 06:15:47.647926 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" Feb 03 06:15:47 crc kubenswrapper[4872]: I0203 06:15:47.672410 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" podStartSLOduration=3.795439068 podStartE2EDuration="42.672383781s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:07.933156436 +0000 UTC m=+878.515847840" lastFinishedPulling="2026-02-03 06:15:46.810101139 +0000 UTC m=+917.392792553" observedRunningTime="2026-02-03 06:15:47.668127339 +0000 UTC m=+918.250818753" watchObservedRunningTime="2026-02-03 06:15:47.672383781 +0000 UTC m=+918.255075235" Feb 03 06:15:47 crc kubenswrapper[4872]: I0203 06:15:47.914880 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:47 crc kubenswrapper[4872]: I0203 06:15:47.915243 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:15:48 crc kubenswrapper[4872]: I0203 06:15:48.655711 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" event={"ID":"cfe508f3-98be-48d5-bf5b-3cb24a9ba131","Type":"ContainerStarted","Data":"5e871ab591233151224b61d149bf3bf1d05805ff261254ae88c7a38143244230"} Feb 03 06:15:48 crc kubenswrapper[4872]: I0203 06:15:48.656005 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" Feb 03 06:15:48 crc kubenswrapper[4872]: I0203 06:15:48.681391 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" podStartSLOduration=4.069719531 
podStartE2EDuration="43.681369731s" podCreationTimestamp="2026-02-03 06:15:05 +0000 UTC" firstStartedPulling="2026-02-03 06:15:08.028455234 +0000 UTC m=+878.611146648" lastFinishedPulling="2026-02-03 06:15:47.640105394 +0000 UTC m=+918.222796848" observedRunningTime="2026-02-03 06:15:48.675143772 +0000 UTC m=+919.257835196" watchObservedRunningTime="2026-02-03 06:15:48.681369731 +0000 UTC m=+919.264061145" Feb 03 06:15:48 crc kubenswrapper[4872]: I0203 06:15:48.977019 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrbg7" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" probeResult="failure" output=< Feb 03 06:15:48 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:15:48 crc kubenswrapper[4872]: > Feb 03 06:15:52 crc kubenswrapper[4872]: I0203 06:15:52.112336 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" Feb 03 06:15:56 crc kubenswrapper[4872]: I0203 06:15:56.066133 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-5chl5" Feb 03 06:15:56 crc kubenswrapper[4872]: I0203 06:15:56.470975 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-9x5w8" Feb 03 06:15:56 crc kubenswrapper[4872]: I0203 06:15:56.567591 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-dvqpz" Feb 03 06:15:56 crc kubenswrapper[4872]: I0203 06:15:56.613834 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-ww2cx" Feb 03 06:15:57 crc kubenswrapper[4872]: I0203 06:15:57.151428 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-t4l5f" Feb 03 06:15:58 crc kubenswrapper[4872]: I0203 06:15:58.745324 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4" Feb 03 06:15:58 crc kubenswrapper[4872]: I0203 06:15:58.966599 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrbg7" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" probeResult="failure" output=< Feb 03 06:15:58 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:15:58 crc kubenswrapper[4872]: > Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.271278 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.271676 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:16:01 crc kubenswrapper[4872]: 
I0203 06:16:01.271755 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.272455 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8daff91be8f2ae554bbad9124562735aa9d055c764b0bc522db54af803ed7992"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.272529 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://8daff91be8f2ae554bbad9124562735aa9d055c764b0bc522db54af803ed7992" gracePeriod=600 Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.744976 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="8daff91be8f2ae554bbad9124562735aa9d055c764b0bc522db54af803ed7992" exitCode=0 Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.745025 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"8daff91be8f2ae554bbad9124562735aa9d055c764b0bc522db54af803ed7992"} Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.745679 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"1ee0abf72dd022e7907c6192d8075ae69f194cd75ecc0b9f792ce2b1786381c8"} Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.745781 4872 scope.go:117] "RemoveContainer" containerID="e60bd766edb6c61ff425105b834c7b74cd5da1d6ff6ffe6f2e66cc4bbe2ff323" Feb 03 06:16:01 crc kubenswrapper[4872]: I0203 06:16:01.999591 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kp8jl"] Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.001208 4872 util.go:30] "No sandbox for pod can be found. 
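The machine-config-daemon sequence above is the complete liveness-failure path: the HTTP probe to 127.0.0.1:8798/health is refused, the kubelet marks the container unhealthy, kills it with its grace period (600s here, presumably the pod's terminationGracePeriodSeconds), observes ContainerDied followed by the replacement's ContainerStarted via PLEG, and queues the older dead instance (e60bd766...) for removal. The probe itself amounts to little more than this sketch (the real prober also counts consecutive failures against failureThreshold before restarting):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: time.Second} // probe timeouts are short
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused",
    		// exactly the output logged above.
    		fmt.Println("liveness failure:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthy:", resp.StatusCode < 400)
    }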
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.015006 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kp8jl"] Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.055883 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-utilities\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.055960 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-catalog-content\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.056328 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64hn6\" (UniqueName: \"kubernetes.io/projected/42723145-4b78-415b-b51a-521027bb344c-kube-api-access-64hn6\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.157244 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-utilities\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.157520 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-catalog-content\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.157733 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64hn6\" (UniqueName: \"kubernetes.io/projected/42723145-4b78-415b-b51a-521027bb344c-kube-api-access-64hn6\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.157819 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-utilities\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.158227 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-catalog-content\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.179743 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-64hn6\" (UniqueName: \"kubernetes.io/projected/42723145-4b78-415b-b51a-521027bb344c-kube-api-access-64hn6\") pod \"redhat-marketplace-kp8jl\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.325977 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:02 crc kubenswrapper[4872]: I0203 06:16:02.812652 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kp8jl"] Feb 03 06:16:02 crc kubenswrapper[4872]: W0203 06:16:02.819003 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42723145_4b78_415b_b51a_521027bb344c.slice/crio-a477c105317d6558dee985a68ba58044f662378a36475fe55afc39a74062f204 WatchSource:0}: Error finding container a477c105317d6558dee985a68ba58044f662378a36475fe55afc39a74062f204: Status 404 returned error can't find the container with id a477c105317d6558dee985a68ba58044f662378a36475fe55afc39a74062f204 Feb 03 06:16:03 crc kubenswrapper[4872]: I0203 06:16:03.760464 4872 generic.go:334] "Generic (PLEG): container finished" podID="42723145-4b78-415b-b51a-521027bb344c" containerID="b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6" exitCode=0 Feb 03 06:16:03 crc kubenswrapper[4872]: I0203 06:16:03.760756 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerDied","Data":"b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6"} Feb 03 06:16:03 crc kubenswrapper[4872]: I0203 06:16:03.760807 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerStarted","Data":"a477c105317d6558dee985a68ba58044f662378a36475fe55afc39a74062f204"} Feb 03 06:16:04 crc kubenswrapper[4872]: I0203 06:16:04.768259 4872 generic.go:334] "Generic (PLEG): container finished" podID="42723145-4b78-415b-b51a-521027bb344c" containerID="3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d" exitCode=0 Feb 03 06:16:04 crc kubenswrapper[4872]: I0203 06:16:04.768318 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerDied","Data":"3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d"} Feb 03 06:16:06 crc kubenswrapper[4872]: I0203 06:16:06.782751 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerStarted","Data":"a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0"} Feb 03 06:16:06 crc kubenswrapper[4872]: I0203 06:16:06.800227 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kp8jl" podStartSLOduration=3.617728391 podStartE2EDuration="5.80020736s" podCreationTimestamp="2026-02-03 06:16:01 +0000 UTC" firstStartedPulling="2026-02-03 06:16:03.76254226 +0000 UTC m=+934.345233684" lastFinishedPulling="2026-02-03 06:16:05.945021219 +0000 UTC m=+936.527712653" observedRunningTime="2026-02-03 06:16:06.796401318 +0000 UTC m=+937.379092732" 
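The W-level manager.go:1169 warning above looks like a benign ordering race: the cgroup for crio-a477c105... becomes visible to the resource-monitoring watch before the runtime can answer a status query for that ID, so the lookup returns 404, yet the same container ID is reported ContainerStarted about a second later. Consumers of that kind of transient not-found error typically retry or ignore it, along these lines (all names here are illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var errNotFound = errors.New("status 404: container not found")

    // lookup stands in for a runtime status query that fails until the
    // runtime has registered the freshly created container.
    func lookup(attempt int) error {
    	if attempt < 2 {
    		return errNotFound
    	}
    	return nil
    }

    func main() {
    	for attempt := 0; ; attempt++ {
    		err := lookup(attempt)
    		if err == nil {
    			fmt.Println("container visible on attempt", attempt)
    			return
    		}
    		if !errors.Is(err, errNotFound) || attempt >= 5 {
    			fmt.Println("giving up:", err)
    			return
    		}
    		time.Sleep(100 * time.Millisecond) // transient: retry shortly
    	}
    }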
watchObservedRunningTime="2026-02-03 06:16:06.80020736 +0000 UTC m=+937.382898784" Feb 03 06:16:08 crc kubenswrapper[4872]: I0203 06:16:08.960218 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrbg7" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" probeResult="failure" output=< Feb 03 06:16:08 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:16:08 crc kubenswrapper[4872]: > Feb 03 06:16:12 crc kubenswrapper[4872]: I0203 06:16:12.326364 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:12 crc kubenswrapper[4872]: I0203 06:16:12.326713 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:12 crc kubenswrapper[4872]: I0203 06:16:12.366075 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:12 crc kubenswrapper[4872]: I0203 06:16:12.895641 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:12 crc kubenswrapper[4872]: I0203 06:16:12.968028 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kp8jl"] Feb 03 06:16:14 crc kubenswrapper[4872]: I0203 06:16:14.850418 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kp8jl" podUID="42723145-4b78-415b-b51a-521027bb344c" containerName="registry-server" containerID="cri-o://a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0" gracePeriod=2 Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.338592 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kp8jl" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.408034 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-utilities\") pod \"42723145-4b78-415b-b51a-521027bb344c\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.408081 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-catalog-content\") pod \"42723145-4b78-415b-b51a-521027bb344c\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.408101 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64hn6\" (UniqueName: \"kubernetes.io/projected/42723145-4b78-415b-b51a-521027bb344c-kube-api-access-64hn6\") pod \"42723145-4b78-415b-b51a-521027bb344c\" (UID: \"42723145-4b78-415b-b51a-521027bb344c\") " Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.408613 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-utilities" (OuterVolumeSpecName: "utilities") pod "42723145-4b78-415b-b51a-521027bb344c" (UID: "42723145-4b78-415b-b51a-521027bb344c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.416071 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42723145-4b78-415b-b51a-521027bb344c-kube-api-access-64hn6" (OuterVolumeSpecName: "kube-api-access-64hn6") pod "42723145-4b78-415b-b51a-521027bb344c" (UID: "42723145-4b78-415b-b51a-521027bb344c"). InnerVolumeSpecName "kube-api-access-64hn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.455811 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42723145-4b78-415b-b51a-521027bb344c" (UID: "42723145-4b78-415b-b51a-521027bb344c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.509459 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.509498 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42723145-4b78-415b-b51a-521027bb344c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.509538 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64hn6\" (UniqueName: \"kubernetes.io/projected/42723145-4b78-415b-b51a-521027bb344c-kube-api-access-64hn6\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859882 4872 generic.go:334] "Generic (PLEG): container finished" podID="42723145-4b78-415b-b51a-521027bb344c" containerID="a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0" exitCode=0 Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859929 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerDied","Data":"a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0"} Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859960 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerDied","Data":"a477c105317d6558dee985a68ba58044f662378a36475fe55afc39a74062f204"} Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859976 4872 scope.go:117] "RemoveContainer" containerID="a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0" Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.860097 4872 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859882 4872 generic.go:334] "Generic (PLEG): container finished" podID="42723145-4b78-415b-b51a-521027bb344c" containerID="a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0" exitCode=0
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859929 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerDied","Data":"a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0"}
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859960 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kp8jl" event={"ID":"42723145-4b78-415b-b51a-521027bb344c","Type":"ContainerDied","Data":"a477c105317d6558dee985a68ba58044f662378a36475fe55afc39a74062f204"}
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.859976 4872 scope.go:117] "RemoveContainer" containerID="a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.860097 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kp8jl"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.885350 4872 scope.go:117] "RemoveContainer" containerID="3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.908925 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kp8jl"]
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.912442 4872 scope.go:117] "RemoveContainer" containerID="b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.915495 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kp8jl"]
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.941099 4872 scope.go:117] "RemoveContainer" containerID="a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0"
Feb 03 06:16:15 crc kubenswrapper[4872]: E0203 06:16:15.941533 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0\": container with ID starting with a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0 not found: ID does not exist" containerID="a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.941596 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0"} err="failed to get container status \"a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0\": rpc error: code = NotFound desc = could not find container \"a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0\": container with ID starting with a07e5d46f6007202dd5bd949f71fdbb7fdba8e4bcf3fd76ef4b08702007746c0 not found: ID does not exist"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.941626 4872 scope.go:117] "RemoveContainer" containerID="3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d"
Feb 03 06:16:15 crc kubenswrapper[4872]: E0203 06:16:15.942182 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d\": container with ID starting with 3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d not found: ID does not exist" containerID="3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.942216 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d"} err="failed to get container status \"3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d\": rpc error: code = NotFound desc = could not find container \"3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d\": container with ID starting with 3f20d81f92150a30bea8fda9d8093a8195b51fe0ad630a925d579c849a1cb77d not found: ID does not exist"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.942237 4872 scope.go:117] "RemoveContainer" containerID="b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6"
Feb 03 06:16:15 crc kubenswrapper[4872]: E0203 06:16:15.942651 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6\": container with ID starting with b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6 not found: ID does not exist" containerID="b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6"
Feb 03 06:16:15 crc kubenswrapper[4872]: I0203 06:16:15.942675 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6"} err="failed to get container status \"b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6\": rpc error: code = NotFound desc = could not find container \"b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6\": container with ID starting with b18e52eec9f867753bf73ece81ffc9b32959df2d2d64e71217b51d1c089301e6 not found: ID does not exist"
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.886263 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.886795 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.887034 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.887038 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tnllx" Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.900651 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sz584"] Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.964401 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f2tk"] Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.965486 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.969337 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 03 06:16:16 crc kubenswrapper[4872]: I0203 06:16:16.974727 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f2tk"] Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.028381 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24d1500-f066-4bb7-8702-5e2e4075f49b-config\") pod \"dnsmasq-dns-675f4bcbfc-sz584\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.028434 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scbz\" (UniqueName: \"kubernetes.io/projected/b24d1500-f066-4bb7-8702-5e2e4075f49b-kube-api-access-7scbz\") pod \"dnsmasq-dns-675f4bcbfc-sz584\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.129479 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcq7z\" (UniqueName: \"kubernetes.io/projected/9b45218d-7c77-456f-91dc-0b9b9365d610-kube-api-access-pcq7z\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.129532 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-config\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.129559 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.129587 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24d1500-f066-4bb7-8702-5e2e4075f49b-config\") pod \"dnsmasq-dns-675f4bcbfc-sz584\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.129617 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7scbz\" (UniqueName: \"kubernetes.io/projected/b24d1500-f066-4bb7-8702-5e2e4075f49b-kube-api-access-7scbz\") pod \"dnsmasq-dns-675f4bcbfc-sz584\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.130982 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24d1500-f066-4bb7-8702-5e2e4075f49b-config\") pod \"dnsmasq-dns-675f4bcbfc-sz584\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.168945 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scbz\" (UniqueName: \"kubernetes.io/projected/b24d1500-f066-4bb7-8702-5e2e4075f49b-kube-api-access-7scbz\") pod \"dnsmasq-dns-675f4bcbfc-sz584\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.193968 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.230508 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcq7z\" (UniqueName: \"kubernetes.io/projected/9b45218d-7c77-456f-91dc-0b9b9365d610-kube-api-access-pcq7z\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.230559 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-config\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.230592 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.231747 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.233944 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-config\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: 
\"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.256029 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcq7z\" (UniqueName: \"kubernetes.io/projected/9b45218d-7c77-456f-91dc-0b9b9365d610-kube-api-access-pcq7z\") pod \"dnsmasq-dns-78dd6ddcc-2f2tk\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.279819 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.462421 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sz584"] Feb 03 06:16:17 crc kubenswrapper[4872]: W0203 06:16:17.465563 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb24d1500_f066_4bb7_8702_5e2e4075f49b.slice/crio-ae73d7f5bfde32641e3ef94ad32a5cffd9d2017ba01d48e660a7cdca43f07b79 WatchSource:0}: Error finding container ae73d7f5bfde32641e3ef94ad32a5cffd9d2017ba01d48e660a7cdca43f07b79: Status 404 returned error can't find the container with id ae73d7f5bfde32641e3ef94ad32a5cffd9d2017ba01d48e660a7cdca43f07b79 Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.467409 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.797020 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f2tk"] Feb 03 06:16:17 crc kubenswrapper[4872]: W0203 06:16:17.800844 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b45218d_7c77_456f_91dc_0b9b9365d610.slice/crio-50fd520633bb292f600d685f83a6c5b2f47749292043a2610ddde7f1acf90769 WatchSource:0}: Error finding container 50fd520633bb292f600d685f83a6c5b2f47749292043a2610ddde7f1acf90769: Status 404 returned error can't find the container with id 50fd520633bb292f600d685f83a6c5b2f47749292043a2610ddde7f1acf90769 Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.873306 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" event={"ID":"9b45218d-7c77-456f-91dc-0b9b9365d610","Type":"ContainerStarted","Data":"50fd520633bb292f600d685f83a6c5b2f47749292043a2610ddde7f1acf90769"} Feb 03 06:16:17 crc kubenswrapper[4872]: I0203 06:16:17.874556 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" event={"ID":"b24d1500-f066-4bb7-8702-5e2e4075f49b","Type":"ContainerStarted","Data":"ae73d7f5bfde32641e3ef94ad32a5cffd9d2017ba01d48e660a7cdca43f07b79"} Feb 03 06:16:18 crc kubenswrapper[4872]: I0203 06:16:18.960492 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrbg7" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" probeResult="failure" output=< Feb 03 06:16:18 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:16:18 crc kubenswrapper[4872]: > Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.720121 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sz584"] Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.736911 4872 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c7vnb"] Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.737945 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.755426 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c7vnb"] Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.874990 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-config\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.875276 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7g5h\" (UniqueName: \"kubernetes.io/projected/e79e2395-321d-4648-96e9-7f4d595aa9ba-kube-api-access-b7g5h\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.875337 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.980389 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-config\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.980424 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7g5h\" (UniqueName: \"kubernetes.io/projected/e79e2395-321d-4648-96e9-7f4d595aa9ba-kube-api-access-b7g5h\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.980470 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.981535 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-config\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:19 crc kubenswrapper[4872]: I0203 06:16:19.981562 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 
06:16:20.030702 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7g5h\" (UniqueName: \"kubernetes.io/projected/e79e2395-321d-4648-96e9-7f4d595aa9ba-kube-api-access-b7g5h\") pod \"dnsmasq-dns-666b6646f7-c7vnb\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.056378 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f2tk"] Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.072558 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.084877 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vv8sn"] Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.097759 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vv8sn"] Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.097869 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.186288 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrgj4\" (UniqueName: \"kubernetes.io/projected/e2cbb325-483a-4595-8a97-ca8370d79996-kube-api-access-nrgj4\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.189647 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.189756 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-config\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.293192 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.293250 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-config\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.293284 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrgj4\" (UniqueName: \"kubernetes.io/projected/e2cbb325-483a-4595-8a97-ca8370d79996-kube-api-access-nrgj4\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.306736 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.306854 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-config\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.323393 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrgj4\" (UniqueName: \"kubernetes.io/projected/e2cbb325-483a-4595-8a97-ca8370d79996-kube-api-access-nrgj4\") pod \"dnsmasq-dns-57d769cc4f-vv8sn\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.417919 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jt2w2"] Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.421414 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.425945 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt2w2"] Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.430252 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.505559 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7sgr\" (UniqueName: \"kubernetes.io/projected/d7c431d4-8bd4-456f-b697-4a62642afea1-kube-api-access-f7sgr\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.505601 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-utilities\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.505874 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-catalog-content\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.606891 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-catalog-content\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.607061 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7sgr\" (UniqueName: \"kubernetes.io/projected/d7c431d4-8bd4-456f-b697-4a62642afea1-kube-api-access-f7sgr\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.607080 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-utilities\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.607734 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-catalog-content\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.607771 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-utilities\") pod \"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.625401 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7sgr\" (UniqueName: \"kubernetes.io/projected/d7c431d4-8bd4-456f-b697-4a62642afea1-kube-api-access-f7sgr\") pod 
\"certified-operators-jt2w2\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.764171 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.773174 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c7vnb"] Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.901171 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.902450 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.907060 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l8fdn" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.907144 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.907177 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.908628 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.908818 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.908964 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.909181 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.918346 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" event={"ID":"e79e2395-321d-4648-96e9-7f4d595aa9ba","Type":"ContainerStarted","Data":"e06a2b36a54526065566868bf4de7cc2511a8c009f2da98d8186f1a3ebe4ed37"} Feb 03 06:16:20 crc kubenswrapper[4872]: I0203 06:16:20.932044 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017120 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017152 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017185 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xhqb\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-kube-api-access-2xhqb\") pod \"rabbitmq-server-0\" 
(UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017222 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017239 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017253 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017273 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017295 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017370 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017393 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.017424 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.026757 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vv8sn"] Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119464 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119507 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119545 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119569 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119584 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119609 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xhqb\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-kube-api-access-2xhqb\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119641 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119656 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119670 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119704 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.119729 
4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.120029 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.120511 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.121595 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.121912 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.122436 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.129752 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.130456 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.138623 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.141420 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" 
Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.142342 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.172403 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xhqb\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-kube-api-access-2xhqb\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.196109 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.220147 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.287158 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.288380 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.290725 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.292392 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.295179 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.295219 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r85vx" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.295281 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.295370 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.295394 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.295424 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432325 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432610 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432644 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432678 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c1e000-2f81-4251-a9b5-28563d87bb93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432710 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432745 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432761 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dbs\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-kube-api-access-z4dbs\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.432784 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c1e000-2f81-4251-a9b5-28563d87bb93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.437093 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.437137 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.437153 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.537986 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538047 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c1e000-2f81-4251-a9b5-28563d87bb93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538065 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538091 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538115 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dbs\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-kube-api-access-z4dbs\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538202 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c1e000-2f81-4251-a9b5-28563d87bb93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538236 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538266 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538284 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538304 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.538331 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.539310 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.539581 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.543001 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.546394 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.546623 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.547000 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.563538 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt2w2"] Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.567278 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/39c1e000-2f81-4251-a9b5-28563d87bb93-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.567460 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.572193 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c1e000-2f81-4251-a9b5-28563d87bb93-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.583584 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.583907 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.586249 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dbs\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-kube-api-access-z4dbs\") pod \"rabbitmq-cell1-server-0\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.627923 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.867487 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.970451 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659","Type":"ContainerStarted","Data":"3bf4cd0120b3bb1a3591777f7f00289702a5a09145eebcbcae6346bb1a1cef80"} Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.983839 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2w2" event={"ID":"d7c431d4-8bd4-456f-b697-4a62642afea1","Type":"ContainerStarted","Data":"21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175"} Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.983883 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2w2" event={"ID":"d7c431d4-8bd4-456f-b697-4a62642afea1","Type":"ContainerStarted","Data":"f61fc283b1df0bc4773442311e074343740386138a97be9b76aafe732296701b"} Feb 03 06:16:21 crc kubenswrapper[4872]: I0203 06:16:21.990462 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" event={"ID":"e2cbb325-483a-4595-8a97-ca8370d79996","Type":"ContainerStarted","Data":"5f409ec92206c18b728b6412dba2be18e0d8acb9cd6d92ac180064d508e4f449"} Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.180626 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:16:22 crc kubenswrapper[4872]: W0203 06:16:22.220820 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39c1e000_2f81_4251_a9b5_28563d87bb93.slice/crio-13467ab91326a20c0fd1da8d44b996d1e52ab40495e71f4d43a47532486544a2 WatchSource:0}: Error finding container 13467ab91326a20c0fd1da8d44b996d1e52ab40495e71f4d43a47532486544a2: Status 404 returned error can't find the container with id 13467ab91326a20c0fd1da8d44b996d1e52ab40495e71f4d43a47532486544a2 Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.701908 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.708378 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.716397 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.716427 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.716548 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.716738 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2xvsj" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.720265 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.738522 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761606 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761661 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761767 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761805 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tzh\" (UniqueName: \"kubernetes.io/projected/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-kube-api-access-26tzh\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761873 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761923 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761964 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.761985 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863566 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863621 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863663 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863702 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26tzh\" (UniqueName: \"kubernetes.io/projected/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-kube-api-access-26tzh\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863750 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863798 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863855 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.863877 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.864709 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-kolla-config\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.865013 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.865029 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-config-data-default\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.865057 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.866423 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.877678 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.880870 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tzh\" (UniqueName: \"kubernetes.io/projected/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-kube-api-access-26tzh\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.883797 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e939fc-8c23-4843-a7ec-4cbd82d8cff7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:22 crc kubenswrapper[4872]: I0203 06:16:22.885725 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"57e939fc-8c23-4843-a7ec-4cbd82d8cff7\") " pod="openstack/openstack-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.002374 4872 generic.go:334] "Generic (PLEG): container finished" podID="d7c431d4-8bd4-456f-b697-4a62642afea1" 
containerID="21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175" exitCode=0 Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.002415 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2w2" event={"ID":"d7c431d4-8bd4-456f-b697-4a62642afea1","Type":"ContainerDied","Data":"21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175"} Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.007308 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39c1e000-2f81-4251-a9b5-28563d87bb93","Type":"ContainerStarted","Data":"13467ab91326a20c0fd1da8d44b996d1e52ab40495e71f4d43a47532486544a2"} Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.048337 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.510551 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.512025 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.515370 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.515767 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.515824 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.515906 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rnptf" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.538083 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.577765 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a46bed9-4154-4a62-8805-fe67c55a2d89-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.577834 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.577931 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.577972 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") 
pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.578016 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a46bed9-4154-4a62-8805-fe67c55a2d89-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.578053 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xs4\" (UniqueName: \"kubernetes.io/projected/5a46bed9-4154-4a62-8805-fe67c55a2d89-kube-api-access-98xs4\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.578098 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.578133 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a46bed9-4154-4a62-8805-fe67c55a2d89-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.605789 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.679515 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a46bed9-4154-4a62-8805-fe67c55a2d89-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.679869 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98xs4\" (UniqueName: \"kubernetes.io/projected/5a46bed9-4154-4a62-8805-fe67c55a2d89-kube-api-access-98xs4\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.679926 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.679944 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a46bed9-4154-4a62-8805-fe67c55a2d89-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.679973 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a46bed9-4154-4a62-8805-fe67c55a2d89-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.680254 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a46bed9-4154-4a62-8805-fe67c55a2d89-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.680319 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.681085 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.681123 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.681340 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.683392 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.683738 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.688622 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a46bed9-4154-4a62-8805-fe67c55a2d89-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.689941 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5a46bed9-4154-4a62-8805-fe67c55a2d89-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.700304 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a46bed9-4154-4a62-8805-fe67c55a2d89-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.721196 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98xs4\" (UniqueName: \"kubernetes.io/projected/5a46bed9-4154-4a62-8805-fe67c55a2d89-kube-api-access-98xs4\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.738279 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5a46bed9-4154-4a62-8805-fe67c55a2d89\") " pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.863017 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.925539 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.926777 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.930114 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.930414 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.930573 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-js6pr" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.942384 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.994426 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd2a199-4a3b-4e36-8430-5301d68c1595-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.994490 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxzp\" (UniqueName: \"kubernetes.io/projected/ecd2a199-4a3b-4e36-8430-5301d68c1595-kube-api-access-nvxzp\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.994508 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecd2a199-4a3b-4e36-8430-5301d68c1595-kolla-config\") pod \"memcached-0\" (UID: 
\"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.994663 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecd2a199-4a3b-4e36-8430-5301d68c1595-config-data\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:23 crc kubenswrapper[4872]: I0203 06:16:23.994815 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd2a199-4a3b-4e36-8430-5301d68c1595-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.051777 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e939fc-8c23-4843-a7ec-4cbd82d8cff7","Type":"ContainerStarted","Data":"036f1af77ee6c4423c800834ff68fee558bd7471205e7662724ce64b48133f03"} Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.098549 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecd2a199-4a3b-4e36-8430-5301d68c1595-kolla-config\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.098607 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecd2a199-4a3b-4e36-8430-5301d68c1595-config-data\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.098628 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd2a199-4a3b-4e36-8430-5301d68c1595-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.098723 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd2a199-4a3b-4e36-8430-5301d68c1595-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.098765 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxzp\" (UniqueName: \"kubernetes.io/projected/ecd2a199-4a3b-4e36-8430-5301d68c1595-kube-api-access-nvxzp\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.099370 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecd2a199-4a3b-4e36-8430-5301d68c1595-kolla-config\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.100102 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecd2a199-4a3b-4e36-8430-5301d68c1595-config-data\") pod \"memcached-0\" (UID: 
\"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.108246 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd2a199-4a3b-4e36-8430-5301d68c1595-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.108741 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd2a199-4a3b-4e36-8430-5301d68c1595-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.165293 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxzp\" (UniqueName: \"kubernetes.io/projected/ecd2a199-4a3b-4e36-8430-5301d68c1595-kube-api-access-nvxzp\") pod \"memcached-0\" (UID: \"ecd2a199-4a3b-4e36-8430-5301d68c1595\") " pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.273447 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 06:16:24 crc kubenswrapper[4872]: I0203 06:16:24.704702 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.128162 4872 generic.go:334] "Generic (PLEG): container finished" podID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerID="f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf" exitCode=0 Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.128619 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2w2" event={"ID":"d7c431d4-8bd4-456f-b697-4a62642afea1","Type":"ContainerDied","Data":"f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf"} Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.134996 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a46bed9-4154-4a62-8805-fe67c55a2d89","Type":"ContainerStarted","Data":"f8e1ff63e5912f360a814820e86e7093b197efd995680302608f0b621f106709"} Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.218779 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.693206 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.694594 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.702464 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sqfwp" Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.739189 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.749482 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sz2n\" (UniqueName: \"kubernetes.io/projected/f2a15446-559d-442b-859c-783ab8e7a828-kube-api-access-6sz2n\") pod \"kube-state-metrics-0\" (UID: \"f2a15446-559d-442b-859c-783ab8e7a828\") " pod="openstack/kube-state-metrics-0" Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.850586 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sz2n\" (UniqueName: \"kubernetes.io/projected/f2a15446-559d-442b-859c-783ab8e7a828-kube-api-access-6sz2n\") pod \"kube-state-metrics-0\" (UID: \"f2a15446-559d-442b-859c-783ab8e7a828\") " pod="openstack/kube-state-metrics-0" Feb 03 06:16:25 crc kubenswrapper[4872]: I0203 06:16:25.921259 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sz2n\" (UniqueName: \"kubernetes.io/projected/f2a15446-559d-442b-859c-783ab8e7a828-kube-api-access-6sz2n\") pod \"kube-state-metrics-0\" (UID: \"f2a15446-559d-442b-859c-783ab8e7a828\") " pod="openstack/kube-state-metrics-0" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.021592 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.218206 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ecd2a199-4a3b-4e36-8430-5301d68c1595","Type":"ContainerStarted","Data":"d951ea6ec4584110c99ac6b222dfef2deaa1edb3b2ca9835b398e46d0ef9f613"} Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.218248 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2lv6c"] Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.220002 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lv6c"] Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.220200 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.259615 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-utilities\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.261257 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-catalog-content\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.265746 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jm2b\" (UniqueName: \"kubernetes.io/projected/20d583df-d342-40a6-ba1e-2ad93c1f0069-kube-api-access-2jm2b\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.367940 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-utilities\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.368014 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-catalog-content\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.368108 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jm2b\" (UniqueName: \"kubernetes.io/projected/20d583df-d342-40a6-ba1e-2ad93c1f0069-kube-api-access-2jm2b\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.369240 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-catalog-content\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.370659 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-utilities\") pod \"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.384475 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jm2b\" (UniqueName: \"kubernetes.io/projected/20d583df-d342-40a6-ba1e-2ad93c1f0069-kube-api-access-2jm2b\") pod 
\"community-operators-2lv6c\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:26 crc kubenswrapper[4872]: I0203 06:16:26.559469 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:16:28 crc kubenswrapper[4872]: I0203 06:16:28.089846 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:16:28 crc kubenswrapper[4872]: I0203 06:16:28.190423 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.084376 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.085942 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.095492 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.096708 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-flc7m" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.098576 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.098625 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.098771 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.101178 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.126762 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wjbc7"] Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.128749 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.154184 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.154282 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.154795 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zw4gp" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.160363 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wjbc7"] Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.246951 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zbxs4"] Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.248764 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254028 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-log-ovn\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254143 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkp6q\" (UniqueName: \"kubernetes.io/projected/82f89f1b-12ce-4720-9af7-3d8acb128b65-kube-api-access-lkp6q\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254224 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254296 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7qv\" (UniqueName: \"kubernetes.io/projected/19908dab-b232-4cd8-b45b-079cebdee593-kube-api-access-pw7qv\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254392 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82f89f1b-12ce-4720-9af7-3d8acb128b65-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254465 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82f89f1b-12ce-4720-9af7-3d8acb128b65-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254545 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254626 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254727 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f89f1b-12ce-4720-9af7-3d8acb128b65-config\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " 
pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254800 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19908dab-b232-4cd8-b45b-079cebdee593-ovn-controller-tls-certs\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254882 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-run-ovn\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.254955 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-run\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.255024 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19908dab-b232-4cd8-b45b-079cebdee593-combined-ca-bundle\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.255094 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19908dab-b232-4cd8-b45b-079cebdee593-scripts\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.255168 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.262734 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zbxs4"] Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.361228 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-scripts\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.361589 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.361750 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-etc-ovs\") pod \"ovn-controller-ovs-zbxs4\" 
(UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.361953 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f89f1b-12ce-4720-9af7-3d8acb128b65-config\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.362879 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-run\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.362904 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19908dab-b232-4cd8-b45b-079cebdee593-combined-ca-bundle\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.362936 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkp6q\" (UniqueName: \"kubernetes.io/projected/82f89f1b-12ce-4720-9af7-3d8acb128b65-kube-api-access-lkp6q\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.362954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7qv\" (UniqueName: \"kubernetes.io/projected/19908dab-b232-4cd8-b45b-079cebdee593-kube-api-access-pw7qv\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.362970 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-log\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.362988 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-lib\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363011 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82f89f1b-12ce-4720-9af7-3d8acb128b65-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363032 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363049 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-run\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363079 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19908dab-b232-4cd8-b45b-079cebdee593-ovn-controller-tls-certs\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363102 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-run-ovn\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363121 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19908dab-b232-4cd8-b45b-079cebdee593-scripts\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363144 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363166 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-log-ovn\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363188 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363216 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8l7m\" (UniqueName: \"kubernetes.io/projected/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-kube-api-access-h8l7m\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363246 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82f89f1b-12ce-4720-9af7-3d8acb128b65-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363826 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-run-ovn\") pod \"ovn-controller-wjbc7\" (UID: 
\"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363866 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-log-ovn\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.363971 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82f89f1b-12ce-4720-9af7-3d8acb128b65-config\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.365565 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19908dab-b232-4cd8-b45b-079cebdee593-scripts\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.366774 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82f89f1b-12ce-4720-9af7-3d8acb128b65-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.366856 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.366931 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19908dab-b232-4cd8-b45b-079cebdee593-var-run\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.369927 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82f89f1b-12ce-4720-9af7-3d8acb128b65-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.369956 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.372249 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.382492 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19908dab-b232-4cd8-b45b-079cebdee593-combined-ca-bundle\") pod 
\"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.387133 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82f89f1b-12ce-4720-9af7-3d8acb128b65-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.387581 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19908dab-b232-4cd8-b45b-079cebdee593-ovn-controller-tls-certs\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.389165 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7qv\" (UniqueName: \"kubernetes.io/projected/19908dab-b232-4cd8-b45b-079cebdee593-kube-api-access-pw7qv\") pod \"ovn-controller-wjbc7\" (UID: \"19908dab-b232-4cd8-b45b-079cebdee593\") " pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.398181 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.399854 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkp6q\" (UniqueName: \"kubernetes.io/projected/82f89f1b-12ce-4720-9af7-3d8acb128b65-kube-api-access-lkp6q\") pod \"ovsdbserver-nb-0\" (UID: \"82f89f1b-12ce-4720-9af7-3d8acb128b65\") " pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.413273 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrbg7"] Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.413550 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mrbg7" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" containerID="cri-o://d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3" gracePeriod=2 Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.436634 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.471793 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8l7m\" (UniqueName: \"kubernetes.io/projected/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-kube-api-access-h8l7m\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.471846 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-scripts\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.471869 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-etc-ovs\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.471915 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-log\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.471934 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-lib\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.471980 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-run\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.472211 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-run\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.472604 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-log\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.472649 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-var-lib\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.472725 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-etc-ovs\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.474882 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-scripts\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.484339 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wjbc7" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.496889 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8l7m\" (UniqueName: \"kubernetes.io/projected/bfcd6876-7bc4-40d4-94af-6a5c175e7bb0-kube-api-access-h8l7m\") pod \"ovn-controller-ovs-zbxs4\" (UID: \"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0\") " pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:30 crc kubenswrapper[4872]: I0203 06:16:30.576369 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:16:31 crc kubenswrapper[4872]: I0203 06:16:31.270845 4872 generic.go:334] "Generic (PLEG): container finished" podID="0614c415-b72c-4144-a715-033262112981" containerID="d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3" exitCode=0 Feb 03 06:16:31 crc kubenswrapper[4872]: I0203 06:16:31.270906 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbg7" event={"ID":"0614c415-b72c-4144-a715-033262112981","Type":"ContainerDied","Data":"d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3"} Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.569478 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.570822 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.572591 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.572934 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.573050 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-j5fxm" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.573433 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.588284 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713273 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f323a5b2-6517-4f06-baec-308207807af3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713320 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713369 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713411 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f323a5b2-6517-4f06-baec-308207807af3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713457 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f323a5b2-6517-4f06-baec-308207807af3-config\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713494 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713559 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " 
pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.713637 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6h6\" (UniqueName: \"kubernetes.io/projected/f323a5b2-6517-4f06-baec-308207807af3-kube-api-access-ms6h6\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.814844 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f323a5b2-6517-4f06-baec-308207807af3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.814902 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.814950 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.814994 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f323a5b2-6517-4f06-baec-308207807af3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.815043 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f323a5b2-6517-4f06-baec-308207807af3-config\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.815081 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.815120 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.815149 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6h6\" (UniqueName: \"kubernetes.io/projected/f323a5b2-6517-4f06-baec-308207807af3-kube-api-access-ms6h6\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.815896 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/f323a5b2-6517-4f06-baec-308207807af3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.816940 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.818185 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f323a5b2-6517-4f06-baec-308207807af3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.819070 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f323a5b2-6517-4f06-baec-308207807af3-config\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.824472 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.825161 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.830035 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f323a5b2-6517-4f06-baec-308207807af3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.837637 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.842492 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6h6\" (UniqueName: \"kubernetes.io/projected/f323a5b2-6517-4f06-baec-308207807af3-kube-api-access-ms6h6\") pod \"ovsdbserver-sb-0\" (UID: \"f323a5b2-6517-4f06-baec-308207807af3\") " pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:32 crc kubenswrapper[4872]: I0203 06:16:32.902382 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 06:16:37 crc kubenswrapper[4872]: E0203 06:16:37.915578 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3 is running failed: container process not found" containerID="d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 06:16:37 crc kubenswrapper[4872]: E0203 06:16:37.916839 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3 is running failed: container process not found" containerID="d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 06:16:37 crc kubenswrapper[4872]: E0203 06:16:37.917374 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3 is running failed: container process not found" containerID="d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 06:16:37 crc kubenswrapper[4872]: E0203 06:16:37.917455 4872 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-mrbg7" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.216191 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.216841 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4dbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(39c1e000-2f81-4251-a9b5-28563d87bb93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.218788 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.299012 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.345066 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.345240 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xhqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b3e0a9c0-be7d-41fe-b216-aa18c0d2d659): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.347706 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-server-0" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.376383 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-utilities\") pod \"0614c415-b72c-4144-a715-033262112981\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.376483 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-catalog-content\") pod \"0614c415-b72c-4144-a715-033262112981\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.376660 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7g27\" (UniqueName: \"kubernetes.io/projected/0614c415-b72c-4144-a715-033262112981-kube-api-access-p7g27\") pod \"0614c415-b72c-4144-a715-033262112981\" (UID: \"0614c415-b72c-4144-a715-033262112981\") " Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.377961 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-utilities" (OuterVolumeSpecName: "utilities") pod "0614c415-b72c-4144-a715-033262112981" (UID: "0614c415-b72c-4144-a715-033262112981"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.396323 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbg7" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.396734 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbg7" event={"ID":"0614c415-b72c-4144-a715-033262112981","Type":"ContainerDied","Data":"d62b5510a34e8f36d06cec7d93923c13541f69891b2f5897cc801cff5db750e1"} Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.396765 4872 scope.go:117] "RemoveContainer" containerID="d548483fcff572043699f7231335cfb685af84a6a7f9cc6eff8c34d25cb94eb3" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.400375 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.400505 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.409899 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0614c415-b72c-4144-a715-033262112981-kube-api-access-p7g27" (OuterVolumeSpecName: "kube-api-access-p7g27") pod "0614c415-b72c-4144-a715-033262112981" (UID: "0614c415-b72c-4144-a715-033262112981"). InnerVolumeSpecName "kube-api-access-p7g27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.474058 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.474235 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98xs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(5a46bed9-4154-4a62-8805-fe67c55a2d89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:16:46 crc kubenswrapper[4872]: E0203 06:16:46.475491 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="5a46bed9-4154-4a62-8805-fe67c55a2d89" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.480535 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.481010 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7g27\" (UniqueName: \"kubernetes.io/projected/0614c415-b72c-4144-a715-033262112981-kube-api-access-p7g27\") on 
node \"crc\" DevicePath \"\"" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.480624 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0614c415-b72c-4144-a715-033262112981" (UID: "0614c415-b72c-4144-a715-033262112981"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.582858 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0614c415-b72c-4144-a715-033262112981-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.737100 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrbg7"] Feb 03 06:16:46 crc kubenswrapper[4872]: I0203 06:16:46.741775 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mrbg7"] Feb 03 06:16:48 crc kubenswrapper[4872]: I0203 06:16:48.133461 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0614c415-b72c-4144-a715-033262112981" path="/var/lib/kubelet/pods/0614c415-b72c-4144-a715-033262112981/volumes" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.843265 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.843864 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7scbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-sz584_openstack(b24d1500-f066-4bb7-8702-5e2e4075f49b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.845032 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" podUID="b24d1500-f066-4bb7-8702-5e2e4075f49b" Feb 03 06:16:53 crc kubenswrapper[4872]: I0203 06:16:53.900094 4872 scope.go:117] "RemoveContainer" containerID="cae0174f81f4d4bed42c1f84c8a1f977e5a02581064153c8d787670d445aaea4" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.969542 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.969988 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pcq7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2f2tk_openstack(9b45218d-7c77-456f-91dc-0b9b9365d610): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.969567 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.970084 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7g5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-c7vnb_openstack(e79e2395-321d-4648-96e9-7f4d595aa9ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.971468 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" podUID="e79e2395-321d-4648-96e9-7f4d595aa9ba" Feb 03 06:16:53 crc kubenswrapper[4872]: E0203 06:16:53.971522 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" podUID="9b45218d-7c77-456f-91dc-0b9b9365d610" Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.018325 4872 scope.go:117] "RemoveContainer" containerID="6a1d9da37e7dca0740ebc3cbcb8d17b5fd559a950adf91849ebe5c791de67dce" Feb 03 06:16:54 crc kubenswrapper[4872]: E0203 06:16:54.023646 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 03 06:16:54 crc kubenswrapper[4872]: E0203 06:16:54.023874 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrgj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-vv8sn_openstack(e2cbb325-483a-4595-8a97-ca8370d79996): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:16:54 crc kubenswrapper[4872]: E0203 06:16:54.028025 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.471734 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wjbc7"] Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.477333 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a46bed9-4154-4a62-8805-fe67c55a2d89","Type":"ContainerStarted","Data":"379f79ff0c233c903f8088423c83ac8670c16474a4dd137d96ddd7331ff42aed"} Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.490801 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2w2" event={"ID":"d7c431d4-8bd4-456f-b697-4a62642afea1","Type":"ContainerStarted","Data":"9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6"} Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.497763 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ecd2a199-4a3b-4e36-8430-5301d68c1595","Type":"ContainerStarted","Data":"fffa0aeb2d99a5314b4cafa247e0c888fc9541e26cc67d7d9bebe9b04f9399d8"} Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.498698 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 
06:16:54.511402 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e939fc-8c23-4843-a7ec-4cbd82d8cff7","Type":"ContainerStarted","Data":"c5d64e17ca82d9e9e42dd2fecd14247b9731047f5475e5082aa65d8cbbf5177c"} Feb 03 06:16:54 crc kubenswrapper[4872]: E0203 06:16:54.513058 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" Feb 03 06:16:54 crc kubenswrapper[4872]: E0203 06:16:54.517128 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" podUID="e79e2395-321d-4648-96e9-7f4d595aa9ba" Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.532372 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.701599945 podStartE2EDuration="31.532355199s" podCreationTimestamp="2026-02-03 06:16:23 +0000 UTC" firstStartedPulling="2026-02-03 06:16:25.203384269 +0000 UTC m=+955.786075683" lastFinishedPulling="2026-02-03 06:16:54.034139523 +0000 UTC m=+984.616830937" observedRunningTime="2026-02-03 06:16:54.524336596 +0000 UTC m=+985.107028010" watchObservedRunningTime="2026-02-03 06:16:54.532355199 +0000 UTC m=+985.115046613" Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.560178 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lv6c"] Feb 03 06:16:54 crc kubenswrapper[4872]: I0203 06:16:54.577786 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.111414 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.122948 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-config\") pod \"9b45218d-7c77-456f-91dc-0b9b9365d610\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.122997 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcq7z\" (UniqueName: \"kubernetes.io/projected/9b45218d-7c77-456f-91dc-0b9b9365d610-kube-api-access-pcq7z\") pod \"9b45218d-7c77-456f-91dc-0b9b9365d610\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.123028 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-dns-svc\") pod \"9b45218d-7c77-456f-91dc-0b9b9365d610\" (UID: \"9b45218d-7c77-456f-91dc-0b9b9365d610\") " Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.123821 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b45218d-7c77-456f-91dc-0b9b9365d610" (UID: "9b45218d-7c77-456f-91dc-0b9b9365d610"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.127116 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.127613 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-config" (OuterVolumeSpecName: "config") pod "9b45218d-7c77-456f-91dc-0b9b9365d610" (UID: "9b45218d-7c77-456f-91dc-0b9b9365d610"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.150177 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b45218d-7c77-456f-91dc-0b9b9365d610-kube-api-access-pcq7z" (OuterVolumeSpecName: "kube-api-access-pcq7z") pod "9b45218d-7c77-456f-91dc-0b9b9365d610" (UID: "9b45218d-7c77-456f-91dc-0b9b9365d610"). InnerVolumeSpecName "kube-api-access-pcq7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.224594 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7scbz\" (UniqueName: \"kubernetes.io/projected/b24d1500-f066-4bb7-8702-5e2e4075f49b-kube-api-access-7scbz\") pod \"b24d1500-f066-4bb7-8702-5e2e4075f49b\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.224642 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24d1500-f066-4bb7-8702-5e2e4075f49b-config\") pod \"b24d1500-f066-4bb7-8702-5e2e4075f49b\" (UID: \"b24d1500-f066-4bb7-8702-5e2e4075f49b\") " Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.225677 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24d1500-f066-4bb7-8702-5e2e4075f49b-config" (OuterVolumeSpecName: "config") pod "b24d1500-f066-4bb7-8702-5e2e4075f49b" (UID: "b24d1500-f066-4bb7-8702-5e2e4075f49b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.226211 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.226248 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcq7z\" (UniqueName: \"kubernetes.io/projected/9b45218d-7c77-456f-91dc-0b9b9365d610-kube-api-access-pcq7z\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.226308 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b45218d-7c77-456f-91dc-0b9b9365d610-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.226325 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b24d1500-f066-4bb7-8702-5e2e4075f49b-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.235583 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24d1500-f066-4bb7-8702-5e2e4075f49b-kube-api-access-7scbz" (OuterVolumeSpecName: "kube-api-access-7scbz") pod "b24d1500-f066-4bb7-8702-5e2e4075f49b" (UID: "b24d1500-f066-4bb7-8702-5e2e4075f49b"). InnerVolumeSpecName "kube-api-access-7scbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.326938 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7scbz\" (UniqueName: \"kubernetes.io/projected/b24d1500-f066-4bb7-8702-5e2e4075f49b-kube-api-access-7scbz\") on node \"crc\" DevicePath \"\"" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.478901 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 06:16:55 crc kubenswrapper[4872]: W0203 06:16:55.493021 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf323a5b2_6517_4f06_baec_308207807af3.slice/crio-9ef78dfd10a3ffd0feafc692beb6135aa66f52bacd7f5377659f02425ee82a62 WatchSource:0}: Error finding container 9ef78dfd10a3ffd0feafc692beb6135aa66f52bacd7f5377659f02425ee82a62: Status 404 returned error can't find the container with id 9ef78dfd10a3ffd0feafc692beb6135aa66f52bacd7f5377659f02425ee82a62 Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.528062 4872 generic.go:334] "Generic (PLEG): container finished" podID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerID="43696870ce8028b2c886327133c44f16188abd4e0edc0539a04654f6ab77536a" exitCode=0 Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.528141 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lv6c" event={"ID":"20d583df-d342-40a6-ba1e-2ad93c1f0069","Type":"ContainerDied","Data":"43696870ce8028b2c886327133c44f16188abd4e0edc0539a04654f6ab77536a"} Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.528165 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lv6c" event={"ID":"20d583df-d342-40a6-ba1e-2ad93c1f0069","Type":"ContainerStarted","Data":"530edb694f9570f915ed9dd6d795448db4a0e678fa4cfbd4e30224020708c0f1"} Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.529552 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.529570 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-sz584" event={"ID":"b24d1500-f066-4bb7-8702-5e2e4075f49b","Type":"ContainerDied","Data":"ae73d7f5bfde32641e3ef94ad32a5cffd9d2017ba01d48e660a7cdca43f07b79"} Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.532096 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f323a5b2-6517-4f06-baec-308207807af3","Type":"ContainerStarted","Data":"9ef78dfd10a3ffd0feafc692beb6135aa66f52bacd7f5377659f02425ee82a62"} Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.533350 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f2a15446-559d-442b-859c-783ab8e7a828","Type":"ContainerStarted","Data":"0294d2de204483bfa1709c78677b2f48ba5c5e0a6428b05688acf99098f05c92"} Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.534826 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" event={"ID":"9b45218d-7c77-456f-91dc-0b9b9365d610","Type":"ContainerDied","Data":"50fd520633bb292f600d685f83a6c5b2f47749292043a2610ddde7f1acf90769"} Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.534915 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f2tk" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.552040 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7" event={"ID":"19908dab-b232-4cd8-b45b-079cebdee593","Type":"ContainerStarted","Data":"4a6c355c0ed6362470b290a3018b28b3ec098f40fdcc94e63768ebb879dd2796"} Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.587049 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jt2w2" podStartSLOduration=4.57136106 podStartE2EDuration="35.587013191s" podCreationTimestamp="2026-02-03 06:16:20 +0000 UTC" firstStartedPulling="2026-02-03 06:16:23.003735308 +0000 UTC m=+953.586426722" lastFinishedPulling="2026-02-03 06:16:54.019387439 +0000 UTC m=+984.602078853" observedRunningTime="2026-02-03 06:16:55.576500759 +0000 UTC m=+986.159192173" watchObservedRunningTime="2026-02-03 06:16:55.587013191 +0000 UTC m=+986.169704605" Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.666851 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sz584"] Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.685045 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sz584"] Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.705517 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f2tk"] Feb 03 06:16:55 crc kubenswrapper[4872]: I0203 06:16:55.717401 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f2tk"] Feb 03 06:16:56 crc kubenswrapper[4872]: I0203 06:16:56.001097 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 06:16:56 crc kubenswrapper[4872]: I0203 06:16:56.133300 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b45218d-7c77-456f-91dc-0b9b9365d610" path="/var/lib/kubelet/pods/9b45218d-7c77-456f-91dc-0b9b9365d610/volumes" Feb 03 06:16:56 crc kubenswrapper[4872]: I0203 06:16:56.134059 4872 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24d1500-f066-4bb7-8702-5e2e4075f49b" path="/var/lib/kubelet/pods/b24d1500-f066-4bb7-8702-5e2e4075f49b/volumes" Feb 03 06:16:56 crc kubenswrapper[4872]: W0203 06:16:56.137725 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82f89f1b_12ce_4720_9af7_3d8acb128b65.slice/crio-2e6fb0341d2dfdc8d137961d438e601dbc4b9b26d320cea9d3611db8b10fec7d WatchSource:0}: Error finding container 2e6fb0341d2dfdc8d137961d438e601dbc4b9b26d320cea9d3611db8b10fec7d: Status 404 returned error can't find the container with id 2e6fb0341d2dfdc8d137961d438e601dbc4b9b26d320cea9d3611db8b10fec7d Feb 03 06:16:56 crc kubenswrapper[4872]: I0203 06:16:56.403420 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zbxs4"] Feb 03 06:16:56 crc kubenswrapper[4872]: I0203 06:16:56.559809 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"82f89f1b-12ce-4720-9af7-3d8acb128b65","Type":"ContainerStarted","Data":"2e6fb0341d2dfdc8d137961d438e601dbc4b9b26d320cea9d3611db8b10fec7d"} Feb 03 06:16:57 crc kubenswrapper[4872]: I0203 06:16:57.569064 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zbxs4" event={"ID":"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0","Type":"ContainerStarted","Data":"07fd5cb12c2a3816480556ce6541ca004fe779825c0f5d92ce498d2443b35778"} Feb 03 06:16:59 crc kubenswrapper[4872]: I0203 06:16:59.275916 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 03 06:17:00 crc kubenswrapper[4872]: I0203 06:17:00.764715 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:17:00 crc kubenswrapper[4872]: I0203 06:17:00.764762 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:17:01 crc kubenswrapper[4872]: I0203 06:17:01.602863 4872 generic.go:334] "Generic (PLEG): container finished" podID="57e939fc-8c23-4843-a7ec-4cbd82d8cff7" containerID="c5d64e17ca82d9e9e42dd2fecd14247b9731047f5475e5082aa65d8cbbf5177c" exitCode=0 Feb 03 06:17:01 crc kubenswrapper[4872]: I0203 06:17:01.602955 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e939fc-8c23-4843-a7ec-4cbd82d8cff7","Type":"ContainerDied","Data":"c5d64e17ca82d9e9e42dd2fecd14247b9731047f5475e5082aa65d8cbbf5177c"} Feb 03 06:17:01 crc kubenswrapper[4872]: I0203 06:17:01.605309 4872 generic.go:334] "Generic (PLEG): container finished" podID="5a46bed9-4154-4a62-8805-fe67c55a2d89" containerID="379f79ff0c233c903f8088423c83ac8670c16474a4dd137d96ddd7331ff42aed" exitCode=0 Feb 03 06:17:01 crc kubenswrapper[4872]: I0203 06:17:01.605332 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a46bed9-4154-4a62-8805-fe67c55a2d89","Type":"ContainerDied","Data":"379f79ff0c233c903f8088423c83ac8670c16474a4dd137d96ddd7331ff42aed"} Feb 03 06:17:01 crc kubenswrapper[4872]: I0203 06:17:01.814773 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jt2w2" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" probeResult="failure" output=< Feb 03 06:17:01 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:17:01 
crc kubenswrapper[4872]: > Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.613607 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"57e939fc-8c23-4843-a7ec-4cbd82d8cff7","Type":"ContainerStarted","Data":"3a88b98b37bf0a5f1d7bc0e59b8247a875abd6be6460a780c2977d4aacdf9f9e"} Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.617447 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a46bed9-4154-4a62-8805-fe67c55a2d89","Type":"ContainerStarted","Data":"b20fe32300612c17f92ddfb52acc4b27aae83ec7c69689f70710a24858deff5a"} Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.618952 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"82f89f1b-12ce-4720-9af7-3d8acb128b65","Type":"ContainerStarted","Data":"3bafc33c013f9a0133378d5adb971db22acd8c5dc649192eab75ef07481c3dbf"} Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.620512 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7" event={"ID":"19908dab-b232-4cd8-b45b-079cebdee593","Type":"ContainerStarted","Data":"b54012df9956b2d151ffcb4da62ade5e387a343a459fe0ae3876959b32ff2761"} Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.622341 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zbxs4" event={"ID":"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0","Type":"ContainerStarted","Data":"c8b62bcb6834dd524552bcf26535045878cc6c1a1a4c38ed3461938ea9521d74"} Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.623665 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lv6c" event={"ID":"20d583df-d342-40a6-ba1e-2ad93c1f0069","Type":"ContainerStarted","Data":"d2f19934b426580c8ffbf7795de2f41c7176a4aa1a6c26362ccc026d9cf21eec"} Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.624925 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f323a5b2-6517-4f06-baec-308207807af3","Type":"ContainerStarted","Data":"3ab8a9cf8b8a97c6950f5f1243746068a2496b6bef33cd0ed7ff2437b31ad218"} Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.636716 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.854119137 podStartE2EDuration="41.636670467s" podCreationTimestamp="2026-02-03 06:16:21 +0000 UTC" firstStartedPulling="2026-02-03 06:16:23.63073414 +0000 UTC m=+954.213425554" lastFinishedPulling="2026-02-03 06:16:52.41328547 +0000 UTC m=+982.995976884" observedRunningTime="2026-02-03 06:17:02.633229185 +0000 UTC m=+993.215920629" watchObservedRunningTime="2026-02-03 06:17:02.636670467 +0000 UTC m=+993.219361881" Feb 03 06:17:02 crc kubenswrapper[4872]: I0203 06:17:02.678848 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371996.175943 podStartE2EDuration="40.678833382s" podCreationTimestamp="2026-02-03 06:16:22 +0000 UTC" firstStartedPulling="2026-02-03 06:16:24.731512428 +0000 UTC m=+955.314203842" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:02.671858934 +0000 UTC m=+993.254550378" watchObservedRunningTime="2026-02-03 06:17:02.678833382 +0000 UTC m=+993.261524786" Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.050615 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-galera-0" Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.050831 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.636616 4872 generic.go:334] "Generic (PLEG): container finished" podID="bfcd6876-7bc4-40d4-94af-6a5c175e7bb0" containerID="c8b62bcb6834dd524552bcf26535045878cc6c1a1a4c38ed3461938ea9521d74" exitCode=0 Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.636682 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zbxs4" event={"ID":"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0","Type":"ContainerDied","Data":"c8b62bcb6834dd524552bcf26535045878cc6c1a1a4c38ed3461938ea9521d74"} Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.646200 4872 generic.go:334] "Generic (PLEG): container finished" podID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerID="d2f19934b426580c8ffbf7795de2f41c7176a4aa1a6c26362ccc026d9cf21eec" exitCode=0 Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.646425 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lv6c" event={"ID":"20d583df-d342-40a6-ba1e-2ad93c1f0069","Type":"ContainerDied","Data":"d2f19934b426580c8ffbf7795de2f41c7176a4aa1a6c26362ccc026d9cf21eec"} Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.649192 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659","Type":"ContainerStarted","Data":"b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb"} Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.651799 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39c1e000-2f81-4251-a9b5-28563d87bb93","Type":"ContainerStarted","Data":"f04f5897ce704aabeaac3b226dd080524eb8f6014a4fa89c680a2b95c908d017"} Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.760079 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wjbc7" podStartSLOduration=27.443418782 podStartE2EDuration="33.760051983s" podCreationTimestamp="2026-02-03 06:16:30 +0000 UTC" firstStartedPulling="2026-02-03 06:16:54.484481878 +0000 UTC m=+985.067173292" lastFinishedPulling="2026-02-03 06:17:00.801115069 +0000 UTC m=+991.383806493" observedRunningTime="2026-02-03 06:17:03.720589943 +0000 UTC m=+994.303281367" watchObservedRunningTime="2026-02-03 06:17:03.760051983 +0000 UTC m=+994.342743437" Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.863645 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 03 06:17:03 crc kubenswrapper[4872]: I0203 06:17:03.863711 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 03 06:17:04 crc kubenswrapper[4872]: I0203 06:17:04.663285 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zbxs4" event={"ID":"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0","Type":"ContainerStarted","Data":"f8d4b6df9f695b5c469bf5fd458133eeade3ff9a9a78f4bab0c6a4104ec0d1a1"} Feb 03 06:17:04 crc kubenswrapper[4872]: I0203 06:17:04.667519 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f2a15446-559d-442b-859c-783ab8e7a828","Type":"ContainerStarted","Data":"64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19"} Feb 
03 06:17:04 crc kubenswrapper[4872]: I0203 06:17:04.668023 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 06:17:04 crc kubenswrapper[4872]: I0203 06:17:04.690379 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=30.558544887 podStartE2EDuration="39.690355934s" podCreationTimestamp="2026-02-03 06:16:25 +0000 UTC" firstStartedPulling="2026-02-03 06:16:54.625028319 +0000 UTC m=+985.207719733" lastFinishedPulling="2026-02-03 06:17:03.756839356 +0000 UTC m=+994.339530780" observedRunningTime="2026-02-03 06:17:04.686576033 +0000 UTC m=+995.269267487" watchObservedRunningTime="2026-02-03 06:17:04.690355934 +0000 UTC m=+995.273047348" Feb 03 06:17:05 crc kubenswrapper[4872]: I0203 06:17:05.485546 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wjbc7" Feb 03 06:17:05 crc kubenswrapper[4872]: I0203 06:17:05.680348 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zbxs4" event={"ID":"bfcd6876-7bc4-40d4-94af-6a5c175e7bb0","Type":"ContainerStarted","Data":"146ff40c50303ca6cc7beab42ea3fe959c1ab112286bf7f11c2f560011590818"} Feb 03 06:17:05 crc kubenswrapper[4872]: I0203 06:17:05.680592 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:17:05 crc kubenswrapper[4872]: I0203 06:17:05.680635 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:17:05 crc kubenswrapper[4872]: I0203 06:17:05.711221 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zbxs4" podStartSLOduration=31.307722407 podStartE2EDuration="35.711174822s" podCreationTimestamp="2026-02-03 06:16:30 +0000 UTC" firstStartedPulling="2026-02-03 06:16:56.698270926 +0000 UTC m=+987.280962340" lastFinishedPulling="2026-02-03 06:17:01.101723341 +0000 UTC m=+991.684414755" observedRunningTime="2026-02-03 06:17:05.707333769 +0000 UTC m=+996.290025183" watchObservedRunningTime="2026-02-03 06:17:05.711174822 +0000 UTC m=+996.293866266" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.243131 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c7vnb"] Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.302769 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q96j5"] Feb 03 06:17:06 crc kubenswrapper[4872]: E0203 06:17:06.303082 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0614c415-b72c-4144-a715-033262112981" containerName="extract-utilities" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.303093 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0614c415-b72c-4144-a715-033262112981" containerName="extract-utilities" Feb 03 06:17:06 crc kubenswrapper[4872]: E0203 06:17:06.303102 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.303108 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" Feb 03 06:17:06 crc kubenswrapper[4872]: E0203 06:17:06.303120 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0614c415-b72c-4144-a715-033262112981" containerName="extract-content" 
Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.303127 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0614c415-b72c-4144-a715-033262112981" containerName="extract-content" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.303297 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0614c415-b72c-4144-a715-033262112981" containerName="registry-server" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.306116 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.315934 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q96j5"] Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.365958 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-786xl\" (UniqueName: \"kubernetes.io/projected/7f8ac7a8-eeda-488e-9eca-c1113e720839-kube-api-access-786xl\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.366017 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.366053 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-config\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.467980 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-786xl\" (UniqueName: \"kubernetes.io/projected/7f8ac7a8-eeda-488e-9eca-c1113e720839-kube-api-access-786xl\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.468036 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.468070 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-config\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.469041 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-config\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.469520 4872 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.488333 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-786xl\" (UniqueName: \"kubernetes.io/projected/7f8ac7a8-eeda-488e-9eca-c1113e720839-kube-api-access-786xl\") pod \"dnsmasq-dns-7cb5889db5-q96j5\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.630968 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.695928 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lv6c" event={"ID":"20d583df-d342-40a6-ba1e-2ad93c1f0069","Type":"ContainerStarted","Data":"f8254558496d0d29589ca88d3b0a1d538b2bd45b4cd0db4a6e491e4f261a4277"} Feb 03 06:17:06 crc kubenswrapper[4872]: I0203 06:17:06.713644 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2lv6c" podStartSLOduration=30.368234865 podStartE2EDuration="40.713626548s" podCreationTimestamp="2026-02-03 06:16:26 +0000 UTC" firstStartedPulling="2026-02-03 06:16:55.529959398 +0000 UTC m=+986.112650812" lastFinishedPulling="2026-02-03 06:17:05.875351041 +0000 UTC m=+996.458042495" observedRunningTime="2026-02-03 06:17:06.713438824 +0000 UTC m=+997.296130248" watchObservedRunningTime="2026-02-03 06:17:06.713626548 +0000 UTC m=+997.296317952" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.479120 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.486330 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.494186 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.494230 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.494326 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-cw7xm" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.494405 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.495132 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.588061 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53916dd7-8beb-48bb-8689-5693b2b3cf6f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.588110 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/53916dd7-8beb-48bb-8689-5693b2b3cf6f-lock\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.588156 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tzd4\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-kube-api-access-7tzd4\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.588279 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.588357 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.588385 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/53916dd7-8beb-48bb-8689-5693b2b3cf6f-cache\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.689632 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53916dd7-8beb-48bb-8689-5693b2b3cf6f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.689739 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/53916dd7-8beb-48bb-8689-5693b2b3cf6f-lock\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.689777 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tzd4\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-kube-api-access-7tzd4\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.690156 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.690306 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.690325 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/53916dd7-8beb-48bb-8689-5693b2b3cf6f-cache\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: E0203 06:17:07.690254 4872 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.690660 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/53916dd7-8beb-48bb-8689-5693b2b3cf6f-cache\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.690566 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: E0203 06:17:07.690670 4872 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.690347 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/53916dd7-8beb-48bb-8689-5693b2b3cf6f-lock\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: E0203 06:17:07.690753 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift podName:53916dd7-8beb-48bb-8689-5693b2b3cf6f nodeName:}" failed. No retries permitted until 2026-02-03 06:17:08.190738595 +0000 UTC m=+998.773430009 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift") pod "swift-storage-0" (UID: "53916dd7-8beb-48bb-8689-5693b2b3cf6f") : configmap "swift-ring-files" not found Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.698370 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53916dd7-8beb-48bb-8689-5693b2b3cf6f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.720013 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tzd4\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-kube-api-access-7tzd4\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.731357 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.739465 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" event={"ID":"e79e2395-321d-4648-96e9-7f4d595aa9ba","Type":"ContainerDied","Data":"e06a2b36a54526065566868bf4de7cc2511a8c009f2da98d8186f1a3ebe4ed37"} Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.739664 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e06a2b36a54526065566868bf4de7cc2511a8c009f2da98d8186f1a3ebe4ed37" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.782103 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.894402 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-dns-svc\") pod \"e79e2395-321d-4648-96e9-7f4d595aa9ba\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.894743 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-config\") pod \"e79e2395-321d-4648-96e9-7f4d595aa9ba\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.894838 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7g5h\" (UniqueName: \"kubernetes.io/projected/e79e2395-321d-4648-96e9-7f4d595aa9ba-kube-api-access-b7g5h\") pod \"e79e2395-321d-4648-96e9-7f4d595aa9ba\" (UID: \"e79e2395-321d-4648-96e9-7f4d595aa9ba\") " Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.898184 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-config" (OuterVolumeSpecName: "config") pod "e79e2395-321d-4648-96e9-7f4d595aa9ba" (UID: "e79e2395-321d-4648-96e9-7f4d595aa9ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.898251 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e79e2395-321d-4648-96e9-7f4d595aa9ba" (UID: "e79e2395-321d-4648-96e9-7f4d595aa9ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.916932 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79e2395-321d-4648-96e9-7f4d595aa9ba-kube-api-access-b7g5h" (OuterVolumeSpecName: "kube-api-access-b7g5h") pod "e79e2395-321d-4648-96e9-7f4d595aa9ba" (UID: "e79e2395-321d-4648-96e9-7f4d595aa9ba"). InnerVolumeSpecName "kube-api-access-b7g5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.996328 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7g5h\" (UniqueName: \"kubernetes.io/projected/e79e2395-321d-4648-96e9-7f4d595aa9ba-kube-api-access-b7g5h\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.996359 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:07 crc kubenswrapper[4872]: I0203 06:17:07.996368 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e2395-321d-4648-96e9-7f4d595aa9ba-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.177844 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q96j5"] Feb 03 06:17:08 crc kubenswrapper[4872]: W0203 06:17:08.184216 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8ac7a8_eeda_488e_9eca_c1113e720839.slice/crio-91b898887e22edbb8af04af927ae6d1cc7a3e2e2b05ab5f45b74a49ccd18e676 WatchSource:0}: Error finding container 91b898887e22edbb8af04af927ae6d1cc7a3e2e2b05ab5f45b74a49ccd18e676: Status 404 returned error can't find the container with id 91b898887e22edbb8af04af927ae6d1cc7a3e2e2b05ab5f45b74a49ccd18e676 Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.198417 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:08 crc kubenswrapper[4872]: E0203 06:17:08.199987 4872 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 06:17:08 crc kubenswrapper[4872]: E0203 06:17:08.200011 4872 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 06:17:08 crc kubenswrapper[4872]: E0203 06:17:08.200052 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift podName:53916dd7-8beb-48bb-8689-5693b2b3cf6f nodeName:}" failed. No retries permitted until 2026-02-03 06:17:09.200034758 +0000 UTC m=+999.782726262 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift") pod "swift-storage-0" (UID: "53916dd7-8beb-48bb-8689-5693b2b3cf6f") : configmap "swift-ring-files" not found Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.745790 4872 generic.go:334] "Generic (PLEG): container finished" podID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerID="cc650cb2673404ec98c76b2b9308ec7508892d05d8369ba568f77e5e9e5912ac" exitCode=0 Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.745853 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" event={"ID":"7f8ac7a8-eeda-488e-9eca-c1113e720839","Type":"ContainerDied","Data":"cc650cb2673404ec98c76b2b9308ec7508892d05d8369ba568f77e5e9e5912ac"} Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.746126 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" event={"ID":"7f8ac7a8-eeda-488e-9eca-c1113e720839","Type":"ContainerStarted","Data":"91b898887e22edbb8af04af927ae6d1cc7a3e2e2b05ab5f45b74a49ccd18e676"} Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.748359 4872 generic.go:334] "Generic (PLEG): container finished" podID="e2cbb325-483a-4595-8a97-ca8370d79996" containerID="bc2b1c0d476d9a6f2fa505e39ed1fdf042e6451110ec8eb0164ba6ffd70395e3" exitCode=0 Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.748414 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" event={"ID":"e2cbb325-483a-4595-8a97-ca8370d79996","Type":"ContainerDied","Data":"bc2b1c0d476d9a6f2fa505e39ed1fdf042e6451110ec8eb0164ba6ffd70395e3"} Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.750331 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f323a5b2-6517-4f06-baec-308207807af3","Type":"ContainerStarted","Data":"b022cf49713eefd244e20d266762d22d564e7e03cfca145d0ad744c40d2bc16e"} Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.755436 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c7vnb" Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.755560 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"82f89f1b-12ce-4720-9af7-3d8acb128b65","Type":"ContainerStarted","Data":"94bceb19450dd61ffb60c98aa02bc54af71bcae43a87dae03e8709c8bd3feaa4"} Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.793458 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.911266848 podStartE2EDuration="39.793442402s" podCreationTimestamp="2026-02-03 06:16:29 +0000 UTC" firstStartedPulling="2026-02-03 06:16:56.139103913 +0000 UTC m=+986.721795347" lastFinishedPulling="2026-02-03 06:17:08.021279487 +0000 UTC m=+998.603970901" observedRunningTime="2026-02-03 06:17:08.790894782 +0000 UTC m=+999.373586196" watchObservedRunningTime="2026-02-03 06:17:08.793442402 +0000 UTC m=+999.376133816" Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.903508 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.948162 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.288941378 podStartE2EDuration="37.948138165s" podCreationTimestamp="2026-02-03 06:16:31 +0000 UTC" firstStartedPulling="2026-02-03 06:16:55.494855174 +0000 UTC m=+986.077546588" lastFinishedPulling="2026-02-03 06:17:08.154051961 +0000 UTC m=+998.736743375" observedRunningTime="2026-02-03 06:17:08.832434181 +0000 UTC m=+999.415125595" watchObservedRunningTime="2026-02-03 06:17:08.948138165 +0000 UTC m=+999.530829589" Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.966199 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c7vnb"] Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.971824 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c7vnb"] Feb 03 06:17:08 crc kubenswrapper[4872]: I0203 06:17:08.988648 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.215400 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:09 crc kubenswrapper[4872]: E0203 06:17:09.215587 4872 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 06:17:09 crc kubenswrapper[4872]: E0203 06:17:09.215851 4872 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 06:17:09 crc kubenswrapper[4872]: E0203 06:17:09.215904 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift podName:53916dd7-8beb-48bb-8689-5693b2b3cf6f nodeName:}" failed. No retries permitted until 2026-02-03 06:17:11.215886556 +0000 UTC m=+1001.798577970 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift") pod "swift-storage-0" (UID: "53916dd7-8beb-48bb-8689-5693b2b3cf6f") : configmap "swift-ring-files" not found Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.437525 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.507850 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.766097 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" event={"ID":"7f8ac7a8-eeda-488e-9eca-c1113e720839","Type":"ContainerStarted","Data":"a4b13b5f282e4b064a96ce98f67dd19d869f411e43339f6df694bcedef052ab7"} Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.766608 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.768328 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" event={"ID":"e2cbb325-483a-4595-8a97-ca8370d79996","Type":"ContainerStarted","Data":"ebae43cb41605e0d4d4117dd14048e46a3146938ae4e5f19ccfe62beb9a13faf"} Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.768646 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.768661 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.794582 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" podStartSLOduration=3.794561577 podStartE2EDuration="3.794561577s" podCreationTimestamp="2026-02-03 06:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:09.789358882 +0000 UTC m=+1000.372050296" watchObservedRunningTime="2026-02-03 06:17:09.794561577 +0000 UTC m=+1000.377253031" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.822359 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.825057 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.838528 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" podStartSLOduration=2.837912565 podStartE2EDuration="49.838505485s" podCreationTimestamp="2026-02-03 06:16:20 +0000 UTC" firstStartedPulling="2026-02-03 06:16:21.008926373 +0000 UTC m=+951.591617777" lastFinishedPulling="2026-02-03 06:17:08.009519283 +0000 UTC m=+998.592210697" observedRunningTime="2026-02-03 06:17:09.815006239 +0000 UTC m=+1000.397697653" watchObservedRunningTime="2026-02-03 06:17:09.838505485 +0000 UTC m=+1000.421196899" Feb 03 06:17:09 crc kubenswrapper[4872]: I0203 06:17:09.967171 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vv8sn"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.026171 4872 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-57d65f699f-9bdmn"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.027521 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.033597 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.108292 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-9bdmn"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.130705 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.130774 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-config\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.130826 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-dns-svc\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.130898 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxql\" (UniqueName: \"kubernetes.io/projected/a34c1dfd-830f-4688-9389-f78b80e4eac7-kube-api-access-cjxql\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.139747 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79e2395-321d-4648-96e9-7f4d595aa9ba" path="/var/lib/kubelet/pods/e79e2395-321d-4648-96e9-7f4d595aa9ba/volumes" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.147929 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xsclk"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.148794 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.154555 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.183606 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xsclk"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.232341 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxql\" (UniqueName: \"kubernetes.io/projected/a34c1dfd-830f-4688-9389-f78b80e4eac7-kube-api-access-cjxql\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.232637 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.232704 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-config\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.232766 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-dns-svc\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.233876 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.234436 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-config\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.234667 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-dns-svc\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.248407 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q96j5"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.290517 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cpkjz"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.292036 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.297478 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cpkjz"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.303489 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxql\" (UniqueName: \"kubernetes.io/projected/a34c1dfd-830f-4688-9389-f78b80e4eac7-kube-api-access-cjxql\") pod \"dnsmasq-dns-57d65f699f-9bdmn\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.310915 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.333881 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e868189d-0fbd-45bc-83cb-9b71f951c53f-ovn-rundir\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.333924 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e868189d-0fbd-45bc-83cb-9b71f951c53f-combined-ca-bundle\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.333976 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e868189d-0fbd-45bc-83cb-9b71f951c53f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.334043 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6llk\" (UniqueName: \"kubernetes.io/projected/e868189d-0fbd-45bc-83cb-9b71f951c53f-kube-api-access-q6llk\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.334079 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e868189d-0fbd-45bc-83cb-9b71f951c53f-ovs-rundir\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.334095 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868189d-0fbd-45bc-83cb-9b71f951c53f-config\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.349865 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.359822 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.361133 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.368032 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fxcb7" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.368195 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.368323 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.368456 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.373908 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.431395 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435665 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868189d-0fbd-45bc-83cb-9b71f951c53f-config\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435718 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e868189d-0fbd-45bc-83cb-9b71f951c53f-ovs-rundir\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435798 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqmrs\" (UniqueName: \"kubernetes.io/projected/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-kube-api-access-pqmrs\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435817 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e868189d-0fbd-45bc-83cb-9b71f951c53f-ovn-rundir\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435864 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e868189d-0fbd-45bc-83cb-9b71f951c53f-combined-ca-bundle\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435885 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435900 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435946 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.435984 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e868189d-0fbd-45bc-83cb-9b71f951c53f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.436045 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.436106 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6llk\" (UniqueName: \"kubernetes.io/projected/e868189d-0fbd-45bc-83cb-9b71f951c53f-kube-api-access-q6llk\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.436499 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868189d-0fbd-45bc-83cb-9b71f951c53f-config\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.436845 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e868189d-0fbd-45bc-83cb-9b71f951c53f-ovs-rundir\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.436868 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e868189d-0fbd-45bc-83cb-9b71f951c53f-ovn-rundir\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.441832 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e868189d-0fbd-45bc-83cb-9b71f951c53f-combined-ca-bundle\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.442074 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e868189d-0fbd-45bc-83cb-9b71f951c53f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.464189 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6llk\" (UniqueName: \"kubernetes.io/projected/e868189d-0fbd-45bc-83cb-9b71f951c53f-kube-api-access-q6llk\") pod \"ovn-controller-metrics-xsclk\" (UID: \"e868189d-0fbd-45bc-83cb-9b71f951c53f\") " pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.478815 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xsclk" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.537635 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.537706 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.538860 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.538978 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540409 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e1641148-8016-42db-879c-29e9e04666f3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540447 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540472 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1641148-8016-42db-879c-29e9e04666f3-scripts\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540496 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx95c\" (UniqueName: \"kubernetes.io/projected/e1641148-8016-42db-879c-29e9e04666f3-kube-api-access-tx95c\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540516 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqmrs\" (UniqueName: \"kubernetes.io/projected/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-kube-api-access-pqmrs\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540536 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1641148-8016-42db-879c-29e9e04666f3-config\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540566 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540581 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.540608 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.541480 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.543337 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.544458 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-sb\") 
pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.567139 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqmrs\" (UniqueName: \"kubernetes.io/projected/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-kube-api-access-pqmrs\") pod \"dnsmasq-dns-b8fbc5445-cpkjz\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.641853 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e1641148-8016-42db-879c-29e9e04666f3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.641892 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.641921 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1641148-8016-42db-879c-29e9e04666f3-scripts\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.641947 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx95c\" (UniqueName: \"kubernetes.io/projected/e1641148-8016-42db-879c-29e9e04666f3-kube-api-access-tx95c\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.641974 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1641148-8016-42db-879c-29e9e04666f3-config\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.642037 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.642074 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.646166 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1641148-8016-42db-879c-29e9e04666f3-scripts\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.647311 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/e1641148-8016-42db-879c-29e9e04666f3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.648844 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.652504 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1641148-8016-42db-879c-29e9e04666f3-config\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.657360 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.657545 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.668349 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1641148-8016-42db-879c-29e9e04666f3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.670976 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx95c\" (UniqueName: \"kubernetes.io/projected/e1641148-8016-42db-879c-29e9e04666f3-kube-api-access-tx95c\") pod \"ovn-northd-0\" (UID: \"e1641148-8016-42db-879c-29e9e04666f3\") " pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.775871 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" containerName="dnsmasq-dns" containerID="cri-o://ebae43cb41605e0d4d4117dd14048e46a3146938ae4e5f19ccfe62beb9a13faf" gracePeriod=10 Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.804155 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 06:17:10 crc kubenswrapper[4872]: I0203 06:17:10.918510 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-9bdmn"] Feb 03 06:17:10 crc kubenswrapper[4872]: W0203 06:17:10.930833 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda34c1dfd_830f_4688_9389_f78b80e4eac7.slice/crio-0f214aa1f6ce7390e51def2fcaac6aacb319890cc9173686f22d2a728794c48e WatchSource:0}: Error finding container 0f214aa1f6ce7390e51def2fcaac6aacb319890cc9173686f22d2a728794c48e: Status 404 returned error can't find the container with id 0f214aa1f6ce7390e51def2fcaac6aacb319890cc9173686f22d2a728794c48e Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.013579 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xsclk"] Feb 03 06:17:11 crc kubenswrapper[4872]: W0203 06:17:11.025714 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode868189d_0fbd_45bc_83cb_9b71f951c53f.slice/crio-088412492ecc3eb5d1f9ad1035b045e4c35ae7cdb2a3c0503562d9ddfe66c054 WatchSource:0}: Error finding container 088412492ecc3eb5d1f9ad1035b045e4c35ae7cdb2a3c0503562d9ddfe66c054: Status 404 returned error can't find the container with id 088412492ecc3eb5d1f9ad1035b045e4c35ae7cdb2a3c0503562d9ddfe66c054 Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.090169 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cpkjz"] Feb 03 06:17:11 crc kubenswrapper[4872]: W0203 06:17:11.097466 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3433831_e142_4ec1_b8ce_dc1d064c3ffa.slice/crio-23ce661d5a23143a978b73c97c33fec8acbf16cc6e8bcceebdeadc211831d327 WatchSource:0}: Error finding container 23ce661d5a23143a978b73c97c33fec8acbf16cc6e8bcceebdeadc211831d327: Status 404 returned error can't find the container with id 23ce661d5a23143a978b73c97c33fec8acbf16cc6e8bcceebdeadc211831d327 Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.198774 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 06:17:11 crc kubenswrapper[4872]: W0203 06:17:11.208592 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1641148_8016_42db_879c_29e9e04666f3.slice/crio-204f59b1457a90824181e43ab5328c5eb2ab61c9974833a28f2f1628f72324a7 WatchSource:0}: Error finding container 204f59b1457a90824181e43ab5328c5eb2ab61c9974833a28f2f1628f72324a7: Status 404 returned error can't find the container with id 204f59b1457a90824181e43ab5328c5eb2ab61c9974833a28f2f1628f72324a7 Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.236987 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-m4mq8"] Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.237931 4872 util.go:30] "No sandbox for pod can be found. 
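
The W-level "Failed to process watch event ... Status 404" messages above come from cAdvisor racing container creation: the cgroup watch fires before the container is inspectable, and the warnings are transient — the matching ContainerStarted events arrive within the next second. The cgroup paths also show the systemd slice naming for pod UIDs, reproduced in this small helper (illustrative; the kubelet derives the name via its cgroup driver):

```go
// Reproduces the slice naming in the watch-event warnings above: pod UID
// a34c1dfd-830f-4688-9389-f78b80e4eac7 becomes
// kubepods-besteffort-poda34c1dfd_830f_4688_9389_f78b80e4eac7.slice.
package main

import (
	"fmt"
	"strings"
)

func besteffortPodSlice(uid string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortPodSlice("a34c1dfd-830f-4688-9389-f78b80e4eac7"))
}
```
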
Need to start a new one" pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.241319 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.242299 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.242493 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.251650 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:11 crc kubenswrapper[4872]: E0203 06:17:11.251784 4872 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 06:17:11 crc kubenswrapper[4872]: E0203 06:17:11.251813 4872 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 06:17:11 crc kubenswrapper[4872]: E0203 06:17:11.251865 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift podName:53916dd7-8beb-48bb-8689-5693b2b3cf6f nodeName:}" failed. No retries permitted until 2026-02-03 06:17:15.251848296 +0000 UTC m=+1005.834539700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift") pod "swift-storage-0" (UID: "53916dd7-8beb-48bb-8689-5693b2b3cf6f") : configmap "swift-ring-files" not found Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.259088 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-m4mq8"] Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.353459 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-scripts\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.353545 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-swiftconf\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.353565 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-dispersionconf\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.353599 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-ring-data-devices\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.353620 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phnwn\" (UniqueName: \"kubernetes.io/projected/27de9be5-8c0c-4283-81ab-6ec3706d94c7-kube-api-access-phnwn\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.353638 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-combined-ca-bundle\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.353659 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27de9be5-8c0c-4283-81ab-6ec3706d94c7-etc-swift\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.455828 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-ring-data-devices\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.455900 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phnwn\" (UniqueName: \"kubernetes.io/projected/27de9be5-8c0c-4283-81ab-6ec3706d94c7-kube-api-access-phnwn\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.455941 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-combined-ca-bundle\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.455988 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27de9be5-8c0c-4283-81ab-6ec3706d94c7-etc-swift\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.456100 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-scripts\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.456231 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-swiftconf\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.456272 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-dispersionconf\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.456921 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27de9be5-8c0c-4283-81ab-6ec3706d94c7-etc-swift\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.457940 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-scripts\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.458061 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-ring-data-devices\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.459965 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-dispersionconf\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.460063 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-combined-ca-bundle\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.460332 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-swiftconf\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.475532 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phnwn\" (UniqueName: \"kubernetes.io/projected/27de9be5-8c0c-4283-81ab-6ec3706d94c7-kube-api-access-phnwn\") pod \"swift-ring-rebalance-m4mq8\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.552250 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.784938 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" event={"ID":"a3433831-e142-4ec1-b8ce-dc1d064c3ffa","Type":"ContainerStarted","Data":"23ce661d5a23143a978b73c97c33fec8acbf16cc6e8bcceebdeadc211831d327"} Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.786698 4872 generic.go:334] "Generic (PLEG): container finished" podID="e2cbb325-483a-4595-8a97-ca8370d79996" containerID="ebae43cb41605e0d4d4117dd14048e46a3146938ae4e5f19ccfe62beb9a13faf" exitCode=0 Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.786740 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" event={"ID":"e2cbb325-483a-4595-8a97-ca8370d79996","Type":"ContainerDied","Data":"ebae43cb41605e0d4d4117dd14048e46a3146938ae4e5f19ccfe62beb9a13faf"} Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.787942 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" event={"ID":"a34c1dfd-830f-4688-9389-f78b80e4eac7","Type":"ContainerStarted","Data":"0f214aa1f6ce7390e51def2fcaac6aacb319890cc9173686f22d2a728794c48e"} Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.788676 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xsclk" event={"ID":"e868189d-0fbd-45bc-83cb-9b71f951c53f","Type":"ContainerStarted","Data":"088412492ecc3eb5d1f9ad1035b045e4c35ae7cdb2a3c0503562d9ddfe66c054"} Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.789788 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e1641148-8016-42db-879c-29e9e04666f3","Type":"ContainerStarted","Data":"204f59b1457a90824181e43ab5328c5eb2ab61c9974833a28f2f1628f72324a7"} Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.790422 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" podUID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerName="dnsmasq-dns" containerID="cri-o://a4b13b5f282e4b064a96ce98f67dd19d869f411e43339f6df694bcedef052ab7" gracePeriod=10 Feb 03 06:17:11 crc kubenswrapper[4872]: I0203 06:17:11.810585 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jt2w2" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" probeResult="failure" output=< Feb 03 06:17:11 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:17:11 crc kubenswrapper[4872]: > Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.012650 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-m4mq8"] Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.318375 4872 util.go:48] "No ready sandbox for pod can be found. 
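
The "SyncLoop (PLEG)" and "Generic (PLEG)" entries above come from the Pod Lifecycle Event Generator: the kubelet periodically relists containers through the CRI, diffs the snapshot against its cache, and injects ContainerStarted/ContainerDied events (with observed exit codes) into the sync loop. A sketch of that relist-and-diff idea, not the real implementation:

```go
// Snapshot container states, diff against the previous snapshot, and emit
// one lifecycle event per transition, the way "SyncLoop (PLEG)" reports
// ContainerStarted and ContainerDied above.
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

func diff(prev, curr map[string]state) {
	for id, s := range curr {
		was, seen := prev[id]
		switch {
		case !seen && s == running:
			fmt.Printf("ContainerStarted %s\n", id)
		case seen && was == running && s == exited:
			fmt.Printf("ContainerDied %s\n", id)
		}
	}
}

func main() {
	// Truncated IDs from the log: the old dnsmasq container dies while the
	// new pod's sandbox comes up.
	prev := map[string]state{"ebae43cb4160": running}
	curr := map[string]state{"ebae43cb4160": exited, "23ce661d5a23": running}
	diff(prev, curr)
}
```
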
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.472670 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrgj4\" (UniqueName: \"kubernetes.io/projected/e2cbb325-483a-4595-8a97-ca8370d79996-kube-api-access-nrgj4\") pod \"e2cbb325-483a-4595-8a97-ca8370d79996\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.472854 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-config\") pod \"e2cbb325-483a-4595-8a97-ca8370d79996\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.472957 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-dns-svc\") pod \"e2cbb325-483a-4595-8a97-ca8370d79996\" (UID: \"e2cbb325-483a-4595-8a97-ca8370d79996\") " Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.479943 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2cbb325-483a-4595-8a97-ca8370d79996-kube-api-access-nrgj4" (OuterVolumeSpecName: "kube-api-access-nrgj4") pod "e2cbb325-483a-4595-8a97-ca8370d79996" (UID: "e2cbb325-483a-4595-8a97-ca8370d79996"). InnerVolumeSpecName "kube-api-access-nrgj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.512002 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2cbb325-483a-4595-8a97-ca8370d79996" (UID: "e2cbb325-483a-4595-8a97-ca8370d79996"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.514014 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-config" (OuterVolumeSpecName: "config") pod "e2cbb325-483a-4595-8a97-ca8370d79996" (UID: "e2cbb325-483a-4595-8a97-ca8370d79996"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.575235 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.575271 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrgj4\" (UniqueName: \"kubernetes.io/projected/e2cbb325-483a-4595-8a97-ca8370d79996-kube-api-access-nrgj4\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.575288 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2cbb325-483a-4595-8a97-ca8370d79996-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.800182 4872 generic.go:334] "Generic (PLEG): container finished" podID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerID="295e6a9491c1300120487148ce28c102be2d043742a71739aee77407642841fc" exitCode=0 Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.800250 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" event={"ID":"a3433831-e142-4ec1-b8ce-dc1d064c3ffa","Type":"ContainerDied","Data":"295e6a9491c1300120487148ce28c102be2d043742a71739aee77407642841fc"} Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.802836 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m4mq8" event={"ID":"27de9be5-8c0c-4283-81ab-6ec3706d94c7","Type":"ContainerStarted","Data":"8f6f6d529b54c8b1ac33a3989e08581988ce9212d8fbcdbf64c356406f7d2cf6"} Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.817699 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" event={"ID":"e2cbb325-483a-4595-8a97-ca8370d79996","Type":"ContainerDied","Data":"5f409ec92206c18b728b6412dba2be18e0d8acb9cd6d92ac180064d508e4f449"} Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.818041 4872 scope.go:117] "RemoveContainer" containerID="ebae43cb41605e0d4d4117dd14048e46a3146938ae4e5f19ccfe62beb9a13faf" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.817665 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vv8sn" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.829013 4872 generic.go:334] "Generic (PLEG): container finished" podID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerID="ae4b00d79401b82fe36e855571ef20f3cd14dd0af6141a959211590617ea26e0" exitCode=0 Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.829085 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" event={"ID":"a34c1dfd-830f-4688-9389-f78b80e4eac7","Type":"ContainerDied","Data":"ae4b00d79401b82fe36e855571ef20f3cd14dd0af6141a959211590617ea26e0"} Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.851239 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xsclk" event={"ID":"e868189d-0fbd-45bc-83cb-9b71f951c53f","Type":"ContainerStarted","Data":"136f956913aa77b8d4665dbafc5d1c0a7d4ee861c5e48d92d3deb0a66251b40a"} Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.857471 4872 generic.go:334] "Generic (PLEG): container finished" podID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerID="a4b13b5f282e4b064a96ce98f67dd19d869f411e43339f6df694bcedef052ab7" exitCode=0 Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.857512 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" event={"ID":"7f8ac7a8-eeda-488e-9eca-c1113e720839","Type":"ContainerDied","Data":"a4b13b5f282e4b064a96ce98f67dd19d869f411e43339f6df694bcedef052ab7"} Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.894734 4872 scope.go:117] "RemoveContainer" containerID="bc2b1c0d476d9a6f2fa505e39ed1fdf042e6451110ec8eb0164ba6ffd70395e3" Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.894934 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vv8sn"] Feb 03 06:17:12 crc kubenswrapper[4872]: I0203 06:17:12.904016 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vv8sn"] Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.626348 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.799484 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-config\") pod \"7f8ac7a8-eeda-488e-9eca-c1113e720839\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.799530 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-786xl\" (UniqueName: \"kubernetes.io/projected/7f8ac7a8-eeda-488e-9eca-c1113e720839-kube-api-access-786xl\") pod \"7f8ac7a8-eeda-488e-9eca-c1113e720839\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.799552 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-dns-svc\") pod \"7f8ac7a8-eeda-488e-9eca-c1113e720839\" (UID: \"7f8ac7a8-eeda-488e-9eca-c1113e720839\") " Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.811736 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8ac7a8-eeda-488e-9eca-c1113e720839-kube-api-access-786xl" (OuterVolumeSpecName: "kube-api-access-786xl") pod "7f8ac7a8-eeda-488e-9eca-c1113e720839" (UID: "7f8ac7a8-eeda-488e-9eca-c1113e720839"). InnerVolumeSpecName "kube-api-access-786xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.851287 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-config" (OuterVolumeSpecName: "config") pod "7f8ac7a8-eeda-488e-9eca-c1113e720839" (UID: "7f8ac7a8-eeda-488e-9eca-c1113e720839"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.851905 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f8ac7a8-eeda-488e-9eca-c1113e720839" (UID: "7f8ac7a8-eeda-488e-9eca-c1113e720839"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.869943 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" event={"ID":"a34c1dfd-830f-4688-9389-f78b80e4eac7","Type":"ContainerStarted","Data":"b542db1f2a7bec42b2c0ce16527825b2e903c2992e9a62960514d145279c2211"} Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.877246 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.877815 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-q96j5" event={"ID":"7f8ac7a8-eeda-488e-9eca-c1113e720839","Type":"ContainerDied","Data":"91b898887e22edbb8af04af927ae6d1cc7a3e2e2b05ab5f45b74a49ccd18e676"} Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.877954 4872 scope.go:117] "RemoveContainer" containerID="a4b13b5f282e4b064a96ce98f67dd19d869f411e43339f6df694bcedef052ab7" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.895499 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xsclk" podStartSLOduration=3.895477574 podStartE2EDuration="3.895477574s" podCreationTimestamp="2026-02-03 06:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:13.889816428 +0000 UTC m=+1004.472507842" watchObservedRunningTime="2026-02-03 06:17:13.895477574 +0000 UTC m=+1004.478168988" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.902872 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.902900 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-786xl\" (UniqueName: \"kubernetes.io/projected/7f8ac7a8-eeda-488e-9eca-c1113e720839-kube-api-access-786xl\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.902911 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f8ac7a8-eeda-488e-9eca-c1113e720839-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.906779 4872 scope.go:117] "RemoveContainer" containerID="cc650cb2673404ec98c76b2b9308ec7508892d05d8369ba568f77e5e9e5912ac" Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.927929 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q96j5"] Feb 03 06:17:13 crc kubenswrapper[4872]: I0203 06:17:13.935576 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-q96j5"] Feb 03 06:17:14 crc kubenswrapper[4872]: I0203 06:17:14.132811 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8ac7a8-eeda-488e-9eca-c1113e720839" path="/var/lib/kubelet/pods/7f8ac7a8-eeda-488e-9eca-c1113e720839/volumes" Feb 03 06:17:14 crc kubenswrapper[4872]: I0203 06:17:14.134076 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" path="/var/lib/kubelet/pods/e2cbb325-483a-4595-8a97-ca8370d79996/volumes" Feb 03 06:17:14 crc kubenswrapper[4872]: I0203 06:17:14.892428 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" event={"ID":"a3433831-e142-4ec1-b8ce-dc1d064c3ffa","Type":"ContainerStarted","Data":"b357e097bab4dc87b01c855216d3fd633e15df64dc5a0c244d30e53718b98670"} Feb 03 06:17:14 crc kubenswrapper[4872]: I0203 06:17:14.892536 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:14 crc kubenswrapper[4872]: I0203 06:17:14.897355 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:14 crc kubenswrapper[4872]: I0203 06:17:14.918281 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" podStartSLOduration=4.91825863 podStartE2EDuration="4.91825863s" podCreationTimestamp="2026-02-03 06:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:14.910949674 +0000 UTC m=+1005.493641108" watchObservedRunningTime="2026-02-03 06:17:14.91825863 +0000 UTC m=+1005.500950044" Feb 03 06:17:14 crc kubenswrapper[4872]: I0203 06:17:14.936371 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" podStartSLOduration=4.936349095 podStartE2EDuration="4.936349095s" podCreationTimestamp="2026-02-03 06:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:14.930551586 +0000 UTC m=+1005.513243030" watchObservedRunningTime="2026-02-03 06:17:14.936349095 +0000 UTC m=+1005.519040529" Feb 03 06:17:15 crc kubenswrapper[4872]: I0203 06:17:15.325032 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:15 crc kubenswrapper[4872]: E0203 06:17:15.325265 4872 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 06:17:15 crc kubenswrapper[4872]: E0203 06:17:15.325290 4872 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 06:17:15 crc kubenswrapper[4872]: E0203 06:17:15.325380 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift podName:53916dd7-8beb-48bb-8689-5693b2b3cf6f nodeName:}" failed. No retries permitted until 2026-02-03 06:17:23.325337624 +0000 UTC m=+1013.908029048 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift") pod "swift-storage-0" (UID: "53916dd7-8beb-48bb-8689-5693b2b3cf6f") : configmap "swift-ring-files" not found Feb 03 06:17:16 crc kubenswrapper[4872]: I0203 06:17:16.027525 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 06:17:16 crc kubenswrapper[4872]: I0203 06:17:16.560768 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:17:16 crc kubenswrapper[4872]: I0203 06:17:16.561500 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:17:16 crc kubenswrapper[4872]: I0203 06:17:16.629743 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:17:16 crc kubenswrapper[4872]: I0203 06:17:16.988629 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:17:17 crc kubenswrapper[4872]: I0203 06:17:17.594781 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lv6c"] Feb 03 06:17:17 crc kubenswrapper[4872]: I0203 06:17:17.902198 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 03 06:17:18 crc kubenswrapper[4872]: I0203 06:17:18.005068 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="57e939fc-8c23-4843-a7ec-4cbd82d8cff7" containerName="galera" probeResult="failure" output=< Feb 03 06:17:18 crc kubenswrapper[4872]: wsrep_local_state_comment (Joined) differs from Synced Feb 03 06:17:18 crc kubenswrapper[4872]: > Feb 03 06:17:18 crc kubenswrapper[4872]: I0203 06:17:18.952174 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2lv6c" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="registry-server" containerID="cri-o://f8254558496d0d29589ca88d3b0a1d538b2bd45b4cd0db4a6e491e4f261a4277" gracePeriod=2 Feb 03 06:17:19 crc kubenswrapper[4872]: I0203 06:17:19.965184 4872 generic.go:334] "Generic (PLEG): container finished" podID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerID="f8254558496d0d29589ca88d3b0a1d538b2bd45b4cd0db4a6e491e4f261a4277" exitCode=0 Feb 03 06:17:19 crc kubenswrapper[4872]: I0203 06:17:19.965228 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lv6c" event={"ID":"20d583df-d342-40a6-ba1e-2ad93c1f0069","Type":"ContainerDied","Data":"f8254558496d0d29589ca88d3b0a1d538b2bd45b4cd0db4a6e491e4f261a4277"} Feb 03 06:17:20 crc kubenswrapper[4872]: I0203 06:17:20.352859 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:20 crc kubenswrapper[4872]: I0203 06:17:20.650823 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:17:20 crc kubenswrapper[4872]: I0203 06:17:20.727450 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-9bdmn"] Feb 03 06:17:20 crc kubenswrapper[4872]: I0203 06:17:20.973941 4872 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerName="dnsmasq-dns" containerID="cri-o://b542db1f2a7bec42b2c0ce16527825b2e903c2992e9a62960514d145279c2211" gracePeriod=10 Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.321299 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.447102 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-catalog-content\") pod \"20d583df-d342-40a6-ba1e-2ad93c1f0069\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.447255 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jm2b\" (UniqueName: \"kubernetes.io/projected/20d583df-d342-40a6-ba1e-2ad93c1f0069-kube-api-access-2jm2b\") pod \"20d583df-d342-40a6-ba1e-2ad93c1f0069\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.447312 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-utilities\") pod \"20d583df-d342-40a6-ba1e-2ad93c1f0069\" (UID: \"20d583df-d342-40a6-ba1e-2ad93c1f0069\") " Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.462948 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d583df-d342-40a6-ba1e-2ad93c1f0069-kube-api-access-2jm2b" (OuterVolumeSpecName: "kube-api-access-2jm2b") pod "20d583df-d342-40a6-ba1e-2ad93c1f0069" (UID: "20d583df-d342-40a6-ba1e-2ad93c1f0069"). InnerVolumeSpecName "kube-api-access-2jm2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.463245 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-utilities" (OuterVolumeSpecName: "utilities") pod "20d583df-d342-40a6-ba1e-2ad93c1f0069" (UID: "20d583df-d342-40a6-ba1e-2ad93c1f0069"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.513993 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20d583df-d342-40a6-ba1e-2ad93c1f0069" (UID: "20d583df-d342-40a6-ba1e-2ad93c1f0069"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.549433 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.549464 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jm2b\" (UniqueName: \"kubernetes.io/projected/20d583df-d342-40a6-ba1e-2ad93c1f0069-kube-api-access-2jm2b\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.549474 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20d583df-d342-40a6-ba1e-2ad93c1f0069-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.697840 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.786906 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5a46bed9-4154-4a62-8805-fe67c55a2d89" containerName="galera" probeResult="failure" output=< Feb 03 06:17:21 crc kubenswrapper[4872]: wsrep_local_state_comment (Joined) differs from Synced Feb 03 06:17:21 crc kubenswrapper[4872]: > Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.809177 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jt2w2" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" probeResult="failure" output=< Feb 03 06:17:21 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:17:21 crc kubenswrapper[4872]: > Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.983071 4872 generic.go:334] "Generic (PLEG): container finished" podID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerID="b542db1f2a7bec42b2c0ce16527825b2e903c2992e9a62960514d145279c2211" exitCode=0 Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.983143 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" event={"ID":"a34c1dfd-830f-4688-9389-f78b80e4eac7","Type":"ContainerDied","Data":"b542db1f2a7bec42b2c0ce16527825b2e903c2992e9a62960514d145279c2211"} Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.985219 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lv6c" event={"ID":"20d583df-d342-40a6-ba1e-2ad93c1f0069","Type":"ContainerDied","Data":"530edb694f9570f915ed9dd6d795448db4a0e678fa4cfbd4e30224020708c0f1"} Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.985274 4872 scope.go:117] "RemoveContainer" containerID="f8254558496d0d29589ca88d3b0a1d538b2bd45b4cd0db4a6e491e4f261a4277" Feb 03 06:17:21 crc kubenswrapper[4872]: I0203 06:17:21.985275 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2lv6c" Feb 03 06:17:22 crc kubenswrapper[4872]: I0203 06:17:22.024534 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lv6c"] Feb 03 06:17:22 crc kubenswrapper[4872]: I0203 06:17:22.031187 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2lv6c"] Feb 03 06:17:22 crc kubenswrapper[4872]: I0203 06:17:22.133117 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" path="/var/lib/kubelet/pods/20d583df-d342-40a6-ba1e-2ad93c1f0069/volumes" Feb 03 06:17:23 crc kubenswrapper[4872]: I0203 06:17:23.161786 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 03 06:17:23 crc kubenswrapper[4872]: I0203 06:17:23.388792 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:23 crc kubenswrapper[4872]: E0203 06:17:23.388971 4872 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 06:17:23 crc kubenswrapper[4872]: E0203 06:17:23.388985 4872 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 06:17:23 crc kubenswrapper[4872]: E0203 06:17:23.389025 4872 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift podName:53916dd7-8beb-48bb-8689-5693b2b3cf6f nodeName:}" failed. No retries permitted until 2026-02-03 06:17:39.389011063 +0000 UTC m=+1029.971702477 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift") pod "swift-storage-0" (UID: "53916dd7-8beb-48bb-8689-5693b2b3cf6f") : configmap "swift-ring-files" not found Feb 03 06:17:23 crc kubenswrapper[4872]: I0203 06:17:23.979298 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.293796 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6t9pq"] Feb 03 06:17:24 crc kubenswrapper[4872]: E0203 06:17:24.294160 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="extract-content" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294172 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="extract-content" Feb 03 06:17:24 crc kubenswrapper[4872]: E0203 06:17:24.294184 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="extract-utilities" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294190 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="extract-utilities" Feb 03 06:17:24 crc kubenswrapper[4872]: E0203 06:17:24.294200 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" containerName="dnsmasq-dns" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294206 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" containerName="dnsmasq-dns" Feb 03 06:17:24 crc kubenswrapper[4872]: E0203 06:17:24.294214 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="registry-server" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294220 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="registry-server" Feb 03 06:17:24 crc kubenswrapper[4872]: E0203 06:17:24.294232 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" containerName="init" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294238 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" containerName="init" Feb 03 06:17:24 crc kubenswrapper[4872]: E0203 06:17:24.294259 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerName="init" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294264 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerName="init" Feb 03 06:17:24 crc kubenswrapper[4872]: E0203 06:17:24.294274 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerName="dnsmasq-dns" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294279 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerName="dnsmasq-dns" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294418 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8ac7a8-eeda-488e-9eca-c1113e720839" containerName="dnsmasq-dns" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 
06:17:24.294430 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d583df-d342-40a6-ba1e-2ad93c1f0069" containerName="registry-server" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294443 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2cbb325-483a-4595-8a97-ca8370d79996" containerName="dnsmasq-dns" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.294984 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.302500 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3446-account-create-update-4x4hf"] Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.308022 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.310624 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.311522 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6t9pq"] Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.317447 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3446-account-create-update-4x4hf"] Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.393291 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-h7nwv"] Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.394162 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.403206 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d299733c-5ce6-4a78-b151-9c5e81026c42-operator-scripts\") pod \"keystone-3446-account-create-update-4x4hf\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.403275 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7km7d\" (UniqueName: \"kubernetes.io/projected/d299733c-5ce6-4a78-b151-9c5e81026c42-kube-api-access-7km7d\") pod \"keystone-3446-account-create-update-4x4hf\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.403318 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d288030f-c7b0-415f-a75c-710290cfcd38-operator-scripts\") pod \"keystone-db-create-6t9pq\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.403507 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbhm\" (UniqueName: \"kubernetes.io/projected/d288030f-c7b0-415f-a75c-710290cfcd38-kube-api-access-dwbhm\") pod \"keystone-db-create-6t9pq\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.412591 4872 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/placement-db-create-h7nwv"] Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.421443 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-29e3-account-create-update-dw7r4"] Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.422578 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.425046 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.437501 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-29e3-account-create-update-dw7r4"] Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.504968 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c8616e4-d904-4f7c-9a38-d30207a53cb4-operator-scripts\") pod \"placement-29e3-account-create-update-dw7r4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505010 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b88e7c-6a76-4769-b83f-bba7810fa54a-operator-scripts\") pod \"placement-db-create-h7nwv\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505035 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbhm\" (UniqueName: \"kubernetes.io/projected/d288030f-c7b0-415f-a75c-710290cfcd38-kube-api-access-dwbhm\") pod \"keystone-db-create-6t9pq\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbpz\" (UniqueName: \"kubernetes.io/projected/1c8616e4-d904-4f7c-9a38-d30207a53cb4-kube-api-access-djbpz\") pod \"placement-29e3-account-create-update-dw7r4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505094 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xhzs\" (UniqueName: \"kubernetes.io/projected/b0b88e7c-6a76-4769-b83f-bba7810fa54a-kube-api-access-8xhzs\") pod \"placement-db-create-h7nwv\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505139 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d299733c-5ce6-4a78-b151-9c5e81026c42-operator-scripts\") pod \"keystone-3446-account-create-update-4x4hf\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505169 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7km7d\" (UniqueName: 
\"kubernetes.io/projected/d299733c-5ce6-4a78-b151-9c5e81026c42-kube-api-access-7km7d\") pod \"keystone-3446-account-create-update-4x4hf\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505195 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d288030f-c7b0-415f-a75c-710290cfcd38-operator-scripts\") pod \"keystone-db-create-6t9pq\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.505821 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d288030f-c7b0-415f-a75c-710290cfcd38-operator-scripts\") pod \"keystone-db-create-6t9pq\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.506573 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d299733c-5ce6-4a78-b151-9c5e81026c42-operator-scripts\") pod \"keystone-3446-account-create-update-4x4hf\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.522892 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7km7d\" (UniqueName: \"kubernetes.io/projected/d299733c-5ce6-4a78-b151-9c5e81026c42-kube-api-access-7km7d\") pod \"keystone-3446-account-create-update-4x4hf\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.534680 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbhm\" (UniqueName: \"kubernetes.io/projected/d288030f-c7b0-415f-a75c-710290cfcd38-kube-api-access-dwbhm\") pod \"keystone-db-create-6t9pq\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.606369 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c8616e4-d904-4f7c-9a38-d30207a53cb4-operator-scripts\") pod \"placement-29e3-account-create-update-dw7r4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.606431 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b88e7c-6a76-4769-b83f-bba7810fa54a-operator-scripts\") pod \"placement-db-create-h7nwv\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.606484 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djbpz\" (UniqueName: \"kubernetes.io/projected/1c8616e4-d904-4f7c-9a38-d30207a53cb4-kube-api-access-djbpz\") pod \"placement-29e3-account-create-update-dw7r4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.607540 4872 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b88e7c-6a76-4769-b83f-bba7810fa54a-operator-scripts\") pod \"placement-db-create-h7nwv\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.607606 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c8616e4-d904-4f7c-9a38-d30207a53cb4-operator-scripts\") pod \"placement-29e3-account-create-update-dw7r4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.607621 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xhzs\" (UniqueName: \"kubernetes.io/projected/b0b88e7c-6a76-4769-b83f-bba7810fa54a-kube-api-access-8xhzs\") pod \"placement-db-create-h7nwv\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.611180 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.625653 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djbpz\" (UniqueName: \"kubernetes.io/projected/1c8616e4-d904-4f7c-9a38-d30207a53cb4-kube-api-access-djbpz\") pod \"placement-29e3-account-create-update-dw7r4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.627720 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.631616 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xhzs\" (UniqueName: \"kubernetes.io/projected/b0b88e7c-6a76-4769-b83f-bba7810fa54a-kube-api-access-8xhzs\") pod \"placement-db-create-h7nwv\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.708734 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:24 crc kubenswrapper[4872]: I0203 06:17:24.749302 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:25 crc kubenswrapper[4872]: I0203 06:17:25.350916 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.346442 4872 scope.go:117] "RemoveContainer" containerID="d2f19934b426580c8ffbf7795de2f41c7176a4aa1a6c26362ccc026d9cf21eec" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.453433 4872 scope.go:117] "RemoveContainer" containerID="43696870ce8028b2c886327133c44f16188abd4e0edc0539a04654f6ab77536a" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.649286 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.696786 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-ovsdbserver-nb\") pod \"a34c1dfd-830f-4688-9389-f78b80e4eac7\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.696862 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-dns-svc\") pod \"a34c1dfd-830f-4688-9389-f78b80e4eac7\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.696918 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjxql\" (UniqueName: \"kubernetes.io/projected/a34c1dfd-830f-4688-9389-f78b80e4eac7-kube-api-access-cjxql\") pod \"a34c1dfd-830f-4688-9389-f78b80e4eac7\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.696944 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-config\") pod \"a34c1dfd-830f-4688-9389-f78b80e4eac7\" (UID: \"a34c1dfd-830f-4688-9389-f78b80e4eac7\") " Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.710982 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34c1dfd-830f-4688-9389-f78b80e4eac7-kube-api-access-cjxql" (OuterVolumeSpecName: "kube-api-access-cjxql") pod "a34c1dfd-830f-4688-9389-f78b80e4eac7" (UID: "a34c1dfd-830f-4688-9389-f78b80e4eac7"). InnerVolumeSpecName "kube-api-access-cjxql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.802316 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjxql\" (UniqueName: \"kubernetes.io/projected/a34c1dfd-830f-4688-9389-f78b80e4eac7-kube-api-access-cjxql\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.807966 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a34c1dfd-830f-4688-9389-f78b80e4eac7" (UID: "a34c1dfd-830f-4688-9389-f78b80e4eac7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.810227 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-config" (OuterVolumeSpecName: "config") pod "a34c1dfd-830f-4688-9389-f78b80e4eac7" (UID: "a34c1dfd-830f-4688-9389-f78b80e4eac7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.821040 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a34c1dfd-830f-4688-9389-f78b80e4eac7" (UID: "a34c1dfd-830f-4688-9389-f78b80e4eac7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.905112 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.905138 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:28 crc kubenswrapper[4872]: I0203 06:17:28.905149 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34c1dfd-830f-4688-9389-f78b80e4eac7-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.038842 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3446-account-create-update-4x4hf"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.054311 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-h7nwv"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.063907 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m4mq8" event={"ID":"27de9be5-8c0c-4283-81ab-6ec3706d94c7","Type":"ContainerStarted","Data":"416508e8f33716280b0c0a8e57ee354f0575f00860e70fc7416b44edfdb88a32"} Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.069594 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" event={"ID":"a34c1dfd-830f-4688-9389-f78b80e4eac7","Type":"ContainerDied","Data":"0f214aa1f6ce7390e51def2fcaac6aacb319890cc9173686f22d2a728794c48e"} Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.069624 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-9bdmn" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.069654 4872 scope.go:117] "RemoveContainer" containerID="b542db1f2a7bec42b2c0ce16527825b2e903c2992e9a62960514d145279c2211" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.072969 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e1641148-8016-42db-879c-29e9e04666f3","Type":"ContainerStarted","Data":"96239e7d74205b30dca06f419ba1a958f3c4d2eb337ba04b382777cf5d604cff"} Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.103266 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-9bdmn"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.110854 4872 scope.go:117] "RemoveContainer" containerID="ae4b00d79401b82fe36e855571ef20f3cd14dd0af6141a959211590617ea26e0" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.121938 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-9bdmn"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.163773 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6t9pq"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.191398 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-29e3-account-create-update-dw7r4"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.417780 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-skqb9"] Feb 03 06:17:29 crc kubenswrapper[4872]: E0203 06:17:29.418086 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerName="dnsmasq-dns" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.418099 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerName="dnsmasq-dns" Feb 03 06:17:29 crc kubenswrapper[4872]: E0203 06:17:29.418133 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerName="init" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.418139 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerName="init" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.418282 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" containerName="dnsmasq-dns" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.418958 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.434832 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-skqb9"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.515064 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33115fbf-8226-4f10-8a4d-bb125f811922-operator-scripts\") pod \"glance-db-create-skqb9\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.515279 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4rl\" (UniqueName: \"kubernetes.io/projected/33115fbf-8226-4f10-8a4d-bb125f811922-kube-api-access-nw4rl\") pod \"glance-db-create-skqb9\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.544199 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0620-account-create-update-5xrsx"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.545568 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.548516 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.575609 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0620-account-create-update-5xrsx"] Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.617130 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw4rl\" (UniqueName: \"kubernetes.io/projected/33115fbf-8226-4f10-8a4d-bb125f811922-kube-api-access-nw4rl\") pod \"glance-db-create-skqb9\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.617283 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33115fbf-8226-4f10-8a4d-bb125f811922-operator-scripts\") pod \"glance-db-create-skqb9\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.617317 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3feeddc-deae-4d30-a8c4-a8b7dec17966-operator-scripts\") pod \"glance-0620-account-create-update-5xrsx\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.617377 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86r9d\" (UniqueName: \"kubernetes.io/projected/e3feeddc-deae-4d30-a8c4-a8b7dec17966-kube-api-access-86r9d\") pod \"glance-0620-account-create-update-5xrsx\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.618538 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/33115fbf-8226-4f10-8a4d-bb125f811922-operator-scripts\") pod \"glance-db-create-skqb9\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.660496 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw4rl\" (UniqueName: \"kubernetes.io/projected/33115fbf-8226-4f10-8a4d-bb125f811922-kube-api-access-nw4rl\") pod \"glance-db-create-skqb9\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.718361 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3feeddc-deae-4d30-a8c4-a8b7dec17966-operator-scripts\") pod \"glance-0620-account-create-update-5xrsx\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.718447 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86r9d\" (UniqueName: \"kubernetes.io/projected/e3feeddc-deae-4d30-a8c4-a8b7dec17966-kube-api-access-86r9d\") pod \"glance-0620-account-create-update-5xrsx\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.719110 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3feeddc-deae-4d30-a8c4-a8b7dec17966-operator-scripts\") pod \"glance-0620-account-create-update-5xrsx\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.742211 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86r9d\" (UniqueName: \"kubernetes.io/projected/e3feeddc-deae-4d30-a8c4-a8b7dec17966-kube-api-access-86r9d\") pod \"glance-0620-account-create-update-5xrsx\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.756048 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-skqb9" Feb 03 06:17:29 crc kubenswrapper[4872]: I0203 06:17:29.861403 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.085069 4872 generic.go:334] "Generic (PLEG): container finished" podID="b0b88e7c-6a76-4769-b83f-bba7810fa54a" containerID="873a680385b43b87ce2cb7f9c22baea0eab6e8cecc32f490547023326e67b2dd" exitCode=0 Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.085332 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-h7nwv" event={"ID":"b0b88e7c-6a76-4769-b83f-bba7810fa54a","Type":"ContainerDied","Data":"873a680385b43b87ce2cb7f9c22baea0eab6e8cecc32f490547023326e67b2dd"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.085753 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-h7nwv" event={"ID":"b0b88e7c-6a76-4769-b83f-bba7810fa54a","Type":"ContainerStarted","Data":"cf5b874ef7a31302d2f5c08aa979251593a451bb65d129ca490f7a370d1dfdc5"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.088333 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e1641148-8016-42db-879c-29e9e04666f3","Type":"ContainerStarted","Data":"d46e5958d37825a0a80b5ab5996e896de395ec225ff9162b53b3153be20fe884"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.088448 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.089322 4872 generic.go:334] "Generic (PLEG): container finished" podID="d288030f-c7b0-415f-a75c-710290cfcd38" containerID="75a8c2e29b24670928dadd9615ff04ca603b887e07b653c06dfabcadbfdd50f1" exitCode=0 Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.089370 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6t9pq" event={"ID":"d288030f-c7b0-415f-a75c-710290cfcd38","Type":"ContainerDied","Data":"75a8c2e29b24670928dadd9615ff04ca603b887e07b653c06dfabcadbfdd50f1"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.089387 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6t9pq" event={"ID":"d288030f-c7b0-415f-a75c-710290cfcd38","Type":"ContainerStarted","Data":"85c5e73e8c6760e93ba845f7befed1b7d42bab0b4e1fb609dc4ce6455606c97d"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.090579 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-29e3-account-create-update-dw7r4" event={"ID":"1c8616e4-d904-4f7c-9a38-d30207a53cb4","Type":"ContainerStarted","Data":"39ead5c97adb089a8fed07dfe75a31ee1d802b62b263dce72e715fb884b69da0"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.090629 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-29e3-account-create-update-dw7r4" event={"ID":"1c8616e4-d904-4f7c-9a38-d30207a53cb4","Type":"ContainerStarted","Data":"310638aaaf8c36dc213744bceca6b1e853d79aba265d468a0b4bf549990d78ce"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.096065 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3446-account-create-update-4x4hf" event={"ID":"d299733c-5ce6-4a78-b151-9c5e81026c42","Type":"ContainerStarted","Data":"2ac21265b035f972bdeb9d29e82d31136fb5474a8ddac2da050e60b60a7500a3"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.104234 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3446-account-create-update-4x4hf" 
event={"ID":"d299733c-5ce6-4a78-b151-9c5e81026c42","Type":"ContainerStarted","Data":"9f2bdb948520cdccae8be45c0529ca9706b853474fa89c10720a099ac8bd57f4"} Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.129588 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-29e3-account-create-update-dw7r4" podStartSLOduration=6.129572892 podStartE2EDuration="6.129572892s" podCreationTimestamp="2026-02-03 06:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:30.122746407 +0000 UTC m=+1020.705437821" watchObservedRunningTime="2026-02-03 06:17:30.129572892 +0000 UTC m=+1020.712264306" Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.147755 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.999461529 podStartE2EDuration="20.147739559s" podCreationTimestamp="2026-02-03 06:17:10 +0000 UTC" firstStartedPulling="2026-02-03 06:17:11.211000683 +0000 UTC m=+1001.793692097" lastFinishedPulling="2026-02-03 06:17:28.359278673 +0000 UTC m=+1018.941970127" observedRunningTime="2026-02-03 06:17:30.145442703 +0000 UTC m=+1020.728134117" watchObservedRunningTime="2026-02-03 06:17:30.147739559 +0000 UTC m=+1020.730430973" Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.154096 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34c1dfd-830f-4688-9389-f78b80e4eac7" path="/var/lib/kubelet/pods/a34c1dfd-830f-4688-9389-f78b80e4eac7/volumes" Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.174248 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-m4mq8" podStartSLOduration=2.564951713 podStartE2EDuration="19.174228566s" podCreationTimestamp="2026-02-03 06:17:11 +0000 UTC" firstStartedPulling="2026-02-03 06:17:12.02353861 +0000 UTC m=+1002.606230024" lastFinishedPulling="2026-02-03 06:17:28.632815463 +0000 UTC m=+1019.215506877" observedRunningTime="2026-02-03 06:17:30.171651314 +0000 UTC m=+1020.754342728" watchObservedRunningTime="2026-02-03 06:17:30.174228566 +0000 UTC m=+1020.756919980" Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.204904 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3446-account-create-update-4x4hf" podStartSLOduration=6.204888804 podStartE2EDuration="6.204888804s" podCreationTimestamp="2026-02-03 06:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:30.197528836 +0000 UTC m=+1020.780220250" watchObservedRunningTime="2026-02-03 06:17:30.204888804 +0000 UTC m=+1020.787580208" Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.302550 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-skqb9"] Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.401467 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0620-account-create-update-5xrsx"] Feb 03 06:17:30 crc kubenswrapper[4872]: W0203 06:17:30.435163 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3feeddc_deae_4d30_a8c4_a8b7dec17966.slice/crio-ce75774dfc6ee8e3dacf085bbfa1f5fda12df34fe1f3172f00b52577db6a712a WatchSource:0}: Error finding container ce75774dfc6ee8e3dacf085bbfa1f5fda12df34fe1f3172f00b52577db6a712a: 
Status 404 returned error can't find the container with id ce75774dfc6ee8e3dacf085bbfa1f5fda12df34fe1f3172f00b52577db6a712a Feb 03 06:17:30 crc kubenswrapper[4872]: I0203 06:17:30.439991 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.111493 4872 generic.go:334] "Generic (PLEG): container finished" podID="d299733c-5ce6-4a78-b151-9c5e81026c42" containerID="2ac21265b035f972bdeb9d29e82d31136fb5474a8ddac2da050e60b60a7500a3" exitCode=0 Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.111891 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3446-account-create-update-4x4hf" event={"ID":"d299733c-5ce6-4a78-b151-9c5e81026c42","Type":"ContainerDied","Data":"2ac21265b035f972bdeb9d29e82d31136fb5474a8ddac2da050e60b60a7500a3"} Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.113088 4872 generic.go:334] "Generic (PLEG): container finished" podID="33115fbf-8226-4f10-8a4d-bb125f811922" containerID="59d3f490f1a537bcba35db6012608020150bf267a12d5344fcbb8aa621ff837b" exitCode=0 Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.113154 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-skqb9" event={"ID":"33115fbf-8226-4f10-8a4d-bb125f811922","Type":"ContainerDied","Data":"59d3f490f1a537bcba35db6012608020150bf267a12d5344fcbb8aa621ff837b"} Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.113174 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-skqb9" event={"ID":"33115fbf-8226-4f10-8a4d-bb125f811922","Type":"ContainerStarted","Data":"b8251e366d4eb9aaf94efb5c3618d9ec0b53ef6d0f40165a6ed186bbdac1c53e"} Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.115188 4872 generic.go:334] "Generic (PLEG): container finished" podID="e3feeddc-deae-4d30-a8c4-a8b7dec17966" containerID="cdd5381e46e88564b6b1a5769d34f844eeb83c78856196f17823395c93eb15e2" exitCode=0 Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.115331 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0620-account-create-update-5xrsx" event={"ID":"e3feeddc-deae-4d30-a8c4-a8b7dec17966","Type":"ContainerDied","Data":"cdd5381e46e88564b6b1a5769d34f844eeb83c78856196f17823395c93eb15e2"} Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.115393 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0620-account-create-update-5xrsx" event={"ID":"e3feeddc-deae-4d30-a8c4-a8b7dec17966","Type":"ContainerStarted","Data":"ce75774dfc6ee8e3dacf085bbfa1f5fda12df34fe1f3172f00b52577db6a712a"} Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.117317 4872 generic.go:334] "Generic (PLEG): container finished" podID="1c8616e4-d904-4f7c-9a38-d30207a53cb4" containerID="39ead5c97adb089a8fed07dfe75a31ee1d802b62b263dce72e715fb884b69da0" exitCode=0 Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.117585 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-29e3-account-create-update-dw7r4" event={"ID":"1c8616e4-d904-4f7c-9a38-d30207a53cb4","Type":"ContainerDied","Data":"39ead5c97adb089a8fed07dfe75a31ee1d802b62b263dce72e715fb884b69da0"} Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.571395 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.576728 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-c8lsb"] Feb 03 06:17:31 crc kubenswrapper[4872]: E0203 06:17:31.577014 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b88e7c-6a76-4769-b83f-bba7810fa54a" containerName="mariadb-database-create" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.577026 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b88e7c-6a76-4769-b83f-bba7810fa54a" containerName="mariadb-database-create" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.577197 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b88e7c-6a76-4769-b83f-bba7810fa54a" containerName="mariadb-database-create" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.577679 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.578229 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.579661 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.591665 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8lsb"] Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672006 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbhm\" (UniqueName: \"kubernetes.io/projected/d288030f-c7b0-415f-a75c-710290cfcd38-kube-api-access-dwbhm\") pod \"d288030f-c7b0-415f-a75c-710290cfcd38\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672096 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xhzs\" (UniqueName: \"kubernetes.io/projected/b0b88e7c-6a76-4769-b83f-bba7810fa54a-kube-api-access-8xhzs\") pod \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672137 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b88e7c-6a76-4769-b83f-bba7810fa54a-operator-scripts\") pod \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\" (UID: \"b0b88e7c-6a76-4769-b83f-bba7810fa54a\") " Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672204 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d288030f-c7b0-415f-a75c-710290cfcd38-operator-scripts\") pod \"d288030f-c7b0-415f-a75c-710290cfcd38\" (UID: \"d288030f-c7b0-415f-a75c-710290cfcd38\") " Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672566 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-operator-scripts\") pod \"root-account-create-update-c8lsb\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672604 4872 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b88e7c-6a76-4769-b83f-bba7810fa54a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0b88e7c-6a76-4769-b83f-bba7810fa54a" (UID: "b0b88e7c-6a76-4769-b83f-bba7810fa54a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672635 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvx6j\" (UniqueName: \"kubernetes.io/projected/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-kube-api-access-wvx6j\") pod \"root-account-create-update-c8lsb\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672717 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b88e7c-6a76-4769-b83f-bba7810fa54a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.672790 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d288030f-c7b0-415f-a75c-710290cfcd38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d288030f-c7b0-415f-a75c-710290cfcd38" (UID: "d288030f-c7b0-415f-a75c-710290cfcd38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.679682 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d288030f-c7b0-415f-a75c-710290cfcd38-kube-api-access-dwbhm" (OuterVolumeSpecName: "kube-api-access-dwbhm") pod "d288030f-c7b0-415f-a75c-710290cfcd38" (UID: "d288030f-c7b0-415f-a75c-710290cfcd38"). InnerVolumeSpecName "kube-api-access-dwbhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.691102 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b88e7c-6a76-4769-b83f-bba7810fa54a-kube-api-access-8xhzs" (OuterVolumeSpecName: "kube-api-access-8xhzs") pod "b0b88e7c-6a76-4769-b83f-bba7810fa54a" (UID: "b0b88e7c-6a76-4769-b83f-bba7810fa54a"). InnerVolumeSpecName "kube-api-access-8xhzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.774148 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvx6j\" (UniqueName: \"kubernetes.io/projected/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-kube-api-access-wvx6j\") pod \"root-account-create-update-c8lsb\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.774449 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-operator-scripts\") pod \"root-account-create-update-c8lsb\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.774558 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbhm\" (UniqueName: \"kubernetes.io/projected/d288030f-c7b0-415f-a75c-710290cfcd38-kube-api-access-dwbhm\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.774587 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xhzs\" (UniqueName: \"kubernetes.io/projected/b0b88e7c-6a76-4769-b83f-bba7810fa54a-kube-api-access-8xhzs\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.774607 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d288030f-c7b0-415f-a75c-710290cfcd38-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.775756 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-operator-scripts\") pod \"root-account-create-update-c8lsb\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.798947 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvx6j\" (UniqueName: \"kubernetes.io/projected/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-kube-api-access-wvx6j\") pod \"root-account-create-update-c8lsb\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.805950 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jt2w2" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" probeResult="failure" output=< Feb 03 06:17:31 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:17:31 crc kubenswrapper[4872]: > Feb 03 06:17:31 crc kubenswrapper[4872]: I0203 06:17:31.896767 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.135268 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-h7nwv" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.139049 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6t9pq" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.145777 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-h7nwv" event={"ID":"b0b88e7c-6a76-4769-b83f-bba7810fa54a","Type":"ContainerDied","Data":"cf5b874ef7a31302d2f5c08aa979251593a451bb65d129ca490f7a370d1dfdc5"} Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.145811 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5b874ef7a31302d2f5c08aa979251593a451bb65d129ca490f7a370d1dfdc5" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.145822 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6t9pq" event={"ID":"d288030f-c7b0-415f-a75c-710290cfcd38","Type":"ContainerDied","Data":"85c5e73e8c6760e93ba845f7befed1b7d42bab0b4e1fb609dc4ce6455606c97d"} Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.145830 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c5e73e8c6760e93ba845f7befed1b7d42bab0b4e1fb609dc4ce6455606c97d" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.386555 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-c8lsb"] Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.718915 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.725580 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.728819 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.736278 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-skqb9" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.794151 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7km7d\" (UniqueName: \"kubernetes.io/projected/d299733c-5ce6-4a78-b151-9c5e81026c42-kube-api-access-7km7d\") pod \"d299733c-5ce6-4a78-b151-9c5e81026c42\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.794446 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw4rl\" (UniqueName: \"kubernetes.io/projected/33115fbf-8226-4f10-8a4d-bb125f811922-kube-api-access-nw4rl\") pod \"33115fbf-8226-4f10-8a4d-bb125f811922\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.794594 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86r9d\" (UniqueName: \"kubernetes.io/projected/e3feeddc-deae-4d30-a8c4-a8b7dec17966-kube-api-access-86r9d\") pod \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.795168 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3feeddc-deae-4d30-a8c4-a8b7dec17966-operator-scripts\") pod \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\" (UID: \"e3feeddc-deae-4d30-a8c4-a8b7dec17966\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.795267 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c8616e4-d904-4f7c-9a38-d30207a53cb4-operator-scripts\") pod \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.795339 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djbpz\" (UniqueName: \"kubernetes.io/projected/1c8616e4-d904-4f7c-9a38-d30207a53cb4-kube-api-access-djbpz\") pod \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\" (UID: \"1c8616e4-d904-4f7c-9a38-d30207a53cb4\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.795419 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33115fbf-8226-4f10-8a4d-bb125f811922-operator-scripts\") pod \"33115fbf-8226-4f10-8a4d-bb125f811922\" (UID: \"33115fbf-8226-4f10-8a4d-bb125f811922\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.795514 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d299733c-5ce6-4a78-b151-9c5e81026c42-operator-scripts\") pod \"d299733c-5ce6-4a78-b151-9c5e81026c42\" (UID: \"d299733c-5ce6-4a78-b151-9c5e81026c42\") " Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.797004 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c8616e4-d904-4f7c-9a38-d30207a53cb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c8616e4-d904-4f7c-9a38-d30207a53cb4" (UID: "1c8616e4-d904-4f7c-9a38-d30207a53cb4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.797282 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d299733c-5ce6-4a78-b151-9c5e81026c42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d299733c-5ce6-4a78-b151-9c5e81026c42" (UID: "d299733c-5ce6-4a78-b151-9c5e81026c42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.799345 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3feeddc-deae-4d30-a8c4-a8b7dec17966-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3feeddc-deae-4d30-a8c4-a8b7dec17966" (UID: "e3feeddc-deae-4d30-a8c4-a8b7dec17966"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.801189 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3feeddc-deae-4d30-a8c4-a8b7dec17966-kube-api-access-86r9d" (OuterVolumeSpecName: "kube-api-access-86r9d") pod "e3feeddc-deae-4d30-a8c4-a8b7dec17966" (UID: "e3feeddc-deae-4d30-a8c4-a8b7dec17966"). InnerVolumeSpecName "kube-api-access-86r9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.801251 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33115fbf-8226-4f10-8a4d-bb125f811922-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33115fbf-8226-4f10-8a4d-bb125f811922" (UID: "33115fbf-8226-4f10-8a4d-bb125f811922"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.801597 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33115fbf-8226-4f10-8a4d-bb125f811922-kube-api-access-nw4rl" (OuterVolumeSpecName: "kube-api-access-nw4rl") pod "33115fbf-8226-4f10-8a4d-bb125f811922" (UID: "33115fbf-8226-4f10-8a4d-bb125f811922"). InnerVolumeSpecName "kube-api-access-nw4rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.803779 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d299733c-5ce6-4a78-b151-9c5e81026c42-kube-api-access-7km7d" (OuterVolumeSpecName: "kube-api-access-7km7d") pod "d299733c-5ce6-4a78-b151-9c5e81026c42" (UID: "d299733c-5ce6-4a78-b151-9c5e81026c42"). InnerVolumeSpecName "kube-api-access-7km7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.804130 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8616e4-d904-4f7c-9a38-d30207a53cb4-kube-api-access-djbpz" (OuterVolumeSpecName: "kube-api-access-djbpz") pod "1c8616e4-d904-4f7c-9a38-d30207a53cb4" (UID: "1c8616e4-d904-4f7c-9a38-d30207a53cb4"). InnerVolumeSpecName "kube-api-access-djbpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897630 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djbpz\" (UniqueName: \"kubernetes.io/projected/1c8616e4-d904-4f7c-9a38-d30207a53cb4-kube-api-access-djbpz\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897661 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33115fbf-8226-4f10-8a4d-bb125f811922-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897670 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d299733c-5ce6-4a78-b151-9c5e81026c42-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897679 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7km7d\" (UniqueName: \"kubernetes.io/projected/d299733c-5ce6-4a78-b151-9c5e81026c42-kube-api-access-7km7d\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897702 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw4rl\" (UniqueName: \"kubernetes.io/projected/33115fbf-8226-4f10-8a4d-bb125f811922-kube-api-access-nw4rl\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897749 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86r9d\" (UniqueName: \"kubernetes.io/projected/e3feeddc-deae-4d30-a8c4-a8b7dec17966-kube-api-access-86r9d\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897783 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3feeddc-deae-4d30-a8c4-a8b7dec17966-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:32 crc kubenswrapper[4872]: I0203 06:17:32.897794 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c8616e4-d904-4f7c-9a38-d30207a53cb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.148850 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3446-account-create-update-4x4hf" event={"ID":"d299733c-5ce6-4a78-b151-9c5e81026c42","Type":"ContainerDied","Data":"9f2bdb948520cdccae8be45c0529ca9706b853474fa89c10720a099ac8bd57f4"} Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.148893 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f2bdb948520cdccae8be45c0529ca9706b853474fa89c10720a099ac8bd57f4" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.148953 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3446-account-create-update-4x4hf" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.162061 4872 generic.go:334] "Generic (PLEG): container finished" podID="d9bc597d-56fc-4364-8f4f-bf283f1f2dda" containerID="ddda5ee77227aa4d67440d46c008c156c3ce814a47d57f08409d39a46ba59c08" exitCode=0 Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.162137 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8lsb" event={"ID":"d9bc597d-56fc-4364-8f4f-bf283f1f2dda","Type":"ContainerDied","Data":"ddda5ee77227aa4d67440d46c008c156c3ce814a47d57f08409d39a46ba59c08"} Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.162164 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8lsb" event={"ID":"d9bc597d-56fc-4364-8f4f-bf283f1f2dda","Type":"ContainerStarted","Data":"3d8f938f975e6bca679aff69ac4ce326237978bb910c9e093930a4a1c61e985d"} Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.163853 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-skqb9" event={"ID":"33115fbf-8226-4f10-8a4d-bb125f811922","Type":"ContainerDied","Data":"b8251e366d4eb9aaf94efb5c3618d9ec0b53ef6d0f40165a6ed186bbdac1c53e"} Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.163870 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-skqb9" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.163880 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8251e366d4eb9aaf94efb5c3618d9ec0b53ef6d0f40165a6ed186bbdac1c53e" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.165828 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0620-account-create-update-5xrsx" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.165970 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0620-account-create-update-5xrsx" event={"ID":"e3feeddc-deae-4d30-a8c4-a8b7dec17966","Type":"ContainerDied","Data":"ce75774dfc6ee8e3dacf085bbfa1f5fda12df34fe1f3172f00b52577db6a712a"} Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.166082 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce75774dfc6ee8e3dacf085bbfa1f5fda12df34fe1f3172f00b52577db6a712a" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.168010 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-29e3-account-create-update-dw7r4" event={"ID":"1c8616e4-d904-4f7c-9a38-d30207a53cb4","Type":"ContainerDied","Data":"310638aaaf8c36dc213744bceca6b1e853d79aba265d468a0b4bf549990d78ce"} Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.168133 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310638aaaf8c36dc213744bceca6b1e853d79aba265d468a0b4bf549990d78ce" Feb 03 06:17:33 crc kubenswrapper[4872]: I0203 06:17:33.168260 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-29e3-account-create-update-dw7r4" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.498415 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.624227 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-operator-scripts\") pod \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.624396 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvx6j\" (UniqueName: \"kubernetes.io/projected/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-kube-api-access-wvx6j\") pod \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\" (UID: \"d9bc597d-56fc-4364-8f4f-bf283f1f2dda\") " Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.626146 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9bc597d-56fc-4364-8f4f-bf283f1f2dda" (UID: "d9bc597d-56fc-4364-8f4f-bf283f1f2dda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.649895 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-kube-api-access-wvx6j" (OuterVolumeSpecName: "kube-api-access-wvx6j") pod "d9bc597d-56fc-4364-8f4f-bf283f1f2dda" (UID: "d9bc597d-56fc-4364-8f4f-bf283f1f2dda"). InnerVolumeSpecName "kube-api-access-wvx6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.721407 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6j26n"] Feb 03 06:17:34 crc kubenswrapper[4872]: E0203 06:17:34.722023 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d288030f-c7b0-415f-a75c-710290cfcd38" containerName="mariadb-database-create" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722044 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d288030f-c7b0-415f-a75c-710290cfcd38" containerName="mariadb-database-create" Feb 03 06:17:34 crc kubenswrapper[4872]: E0203 06:17:34.722069 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8616e4-d904-4f7c-9a38-d30207a53cb4" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722078 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8616e4-d904-4f7c-9a38-d30207a53cb4" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: E0203 06:17:34.722093 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9bc597d-56fc-4364-8f4f-bf283f1f2dda" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722102 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9bc597d-56fc-4364-8f4f-bf283f1f2dda" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: E0203 06:17:34.722116 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3feeddc-deae-4d30-a8c4-a8b7dec17966" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722123 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3feeddc-deae-4d30-a8c4-a8b7dec17966" containerName="mariadb-account-create-update" 
Feb 03 06:17:34 crc kubenswrapper[4872]: E0203 06:17:34.722135 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d299733c-5ce6-4a78-b151-9c5e81026c42" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722143 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d299733c-5ce6-4a78-b151-9c5e81026c42" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: E0203 06:17:34.722154 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33115fbf-8226-4f10-8a4d-bb125f811922" containerName="mariadb-database-create" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722162 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="33115fbf-8226-4f10-8a4d-bb125f811922" containerName="mariadb-database-create" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722337 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8616e4-d904-4f7c-9a38-d30207a53cb4" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722353 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="33115fbf-8226-4f10-8a4d-bb125f811922" containerName="mariadb-database-create" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722363 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d288030f-c7b0-415f-a75c-710290cfcd38" containerName="mariadb-database-create" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722373 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9bc597d-56fc-4364-8f4f-bf283f1f2dda" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722385 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d299733c-5ce6-4a78-b151-9c5e81026c42" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722397 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3feeddc-deae-4d30-a8c4-a8b7dec17966" containerName="mariadb-account-create-update" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.722924 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.725375 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.725706 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvx6j\" (UniqueName: \"kubernetes.io/projected/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-kube-api-access-wvx6j\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.725736 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9bc597d-56fc-4364-8f4f-bf283f1f2dda-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.725834 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xv67c" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.741331 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6j26n"] Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.827065 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk4h6\" (UniqueName: \"kubernetes.io/projected/5a290b0d-8a5a-426e-9561-9372ba41afb5-kube-api-access-pk4h6\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.827130 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-db-sync-config-data\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.827241 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-config-data\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.827272 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-combined-ca-bundle\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.929294 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-config-data\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.929411 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-combined-ca-bundle\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.930250 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pk4h6\" (UniqueName: \"kubernetes.io/projected/5a290b0d-8a5a-426e-9561-9372ba41afb5-kube-api-access-pk4h6\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.930305 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-db-sync-config-data\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.933861 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-config-data\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.934947 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-db-sync-config-data\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.937316 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-combined-ca-bundle\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:34 crc kubenswrapper[4872]: I0203 06:17:34.968770 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk4h6\" (UniqueName: \"kubernetes.io/projected/5a290b0d-8a5a-426e-9561-9372ba41afb5-kube-api-access-pk4h6\") pod \"glance-db-sync-6j26n\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.043536 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6j26n" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.196324 4872 generic.go:334] "Generic (PLEG): container finished" podID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerID="b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb" exitCode=0 Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.196407 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659","Type":"ContainerDied","Data":"b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb"} Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.204052 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-c8lsb" event={"ID":"d9bc597d-56fc-4364-8f4f-bf283f1f2dda","Type":"ContainerDied","Data":"3d8f938f975e6bca679aff69ac4ce326237978bb910c9e093930a4a1c61e985d"} Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.204087 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8f938f975e6bca679aff69ac4ce326237978bb910c9e093930a4a1c61e985d" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.204090 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-c8lsb" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.560284 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wjbc7" podUID="19908dab-b232-4cd8-b45b-079cebdee593" containerName="ovn-controller" probeResult="failure" output=< Feb 03 06:17:35 crc kubenswrapper[4872]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 03 06:17:35 crc kubenswrapper[4872]: > Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.627655 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.640032 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zbxs4" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.717261 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6j26n"] Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.884805 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wjbc7-config-8z2f8"] Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.886017 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.891339 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 03 06:17:35 crc kubenswrapper[4872]: I0203 06:17:35.906604 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wjbc7-config-8z2f8"] Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.050953 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-additional-scripts\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.051304 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run-ovn\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.051412 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.051532 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cps28\" (UniqueName: \"kubernetes.io/projected/d2bc170c-2ba9-485b-864b-7e5843bbae3d-kube-api-access-cps28\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.051636 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-scripts\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.051751 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-log-ovn\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.153395 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-log-ovn\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.153454 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-additional-scripts\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.153537 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run-ovn\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.153558 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.153590 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cps28\" (UniqueName: \"kubernetes.io/projected/d2bc170c-2ba9-485b-864b-7e5843bbae3d-kube-api-access-cps28\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.153617 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-scripts\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.155344 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-scripts\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.155549 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-log-ovn\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.155591 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.156426 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-additional-scripts\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.156772 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run-ovn\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.192457 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cps28\" (UniqueName: \"kubernetes.io/projected/d2bc170c-2ba9-485b-864b-7e5843bbae3d-kube-api-access-cps28\") pod \"ovn-controller-wjbc7-config-8z2f8\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.208607 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.213550 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659","Type":"ContainerStarted","Data":"20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84"} Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.213786 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.214932 4872 generic.go:334] "Generic (PLEG): container finished" podID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerID="f04f5897ce704aabeaac3b226dd080524eb8f6014a4fa89c680a2b95c908d017" exitCode=0 Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.214981 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39c1e000-2f81-4251-a9b5-28563d87bb93","Type":"ContainerDied","Data":"f04f5897ce704aabeaac3b226dd080524eb8f6014a4fa89c680a2b95c908d017"} Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.222638 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6j26n" event={"ID":"5a290b0d-8a5a-426e-9561-9372ba41afb5","Type":"ContainerStarted","Data":"332c4df04ca1ad02bcf1d982e9c6327eaca3db106ebe7d8d888d9e65f2569d5b"} Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.256487 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.082780839 podStartE2EDuration="1m17.256469659s" podCreationTimestamp="2026-02-03 06:16:19 +0000 UTC" firstStartedPulling="2026-02-03 06:16:21.9279872 +0000 UTC m=+952.510678614" lastFinishedPulling="2026-02-03 06:17:01.10167602 +0000 UTC m=+991.684367434" observedRunningTime="2026-02-03 06:17:36.247282897 +0000 UTC m=+1026.829974331" watchObservedRunningTime="2026-02-03 06:17:36.256469659 +0000 UTC m=+1026.839161073" Feb 03 06:17:36 crc kubenswrapper[4872]: I0203 06:17:36.798514 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wjbc7-config-8z2f8"] Feb 03 06:17:37 crc kubenswrapper[4872]: I0203 06:17:37.232656 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7-config-8z2f8" event={"ID":"d2bc170c-2ba9-485b-864b-7e5843bbae3d","Type":"ContainerStarted","Data":"bccec67d9236bc38899976fa6562b3d6c9594b8a4b171fc5f07c10eac93d4f5d"} Feb 03 06:17:37 crc kubenswrapper[4872]: I0203 06:17:37.234956 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39c1e000-2f81-4251-a9b5-28563d87bb93","Type":"ContainerStarted","Data":"e151a3e574fcc4165fe498d721ea9e0b391cb3304402e14533f354c952fc43d9"} Feb 03 06:17:37 crc kubenswrapper[4872]: I0203 06:17:37.263709 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.60618805 podStartE2EDuration="1m17.263680689s" podCreationTimestamp="2026-02-03 06:16:20 +0000 UTC" firstStartedPulling="2026-02-03 06:16:22.233570082 +0000 UTC m=+952.816261496" lastFinishedPulling="2026-02-03 06:17:01.891062691 +0000 UTC m=+992.473754135" observedRunningTime="2026-02-03 06:17:37.259175731 +0000 UTC m=+1027.841867145" watchObservedRunningTime="2026-02-03 06:17:37.263680689 +0000 UTC m=+1027.846372103" Feb 03 06:17:37 crc kubenswrapper[4872]: I0203 06:17:37.522062 4872 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-c8lsb"] Feb 03 06:17:37 crc kubenswrapper[4872]: I0203 06:17:37.528708 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-c8lsb"] Feb 03 06:17:38 crc kubenswrapper[4872]: I0203 06:17:38.132668 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9bc597d-56fc-4364-8f4f-bf283f1f2dda" path="/var/lib/kubelet/pods/d9bc597d-56fc-4364-8f4f-bf283f1f2dda/volumes" Feb 03 06:17:38 crc kubenswrapper[4872]: I0203 06:17:38.249283 4872 generic.go:334] "Generic (PLEG): container finished" podID="27de9be5-8c0c-4283-81ab-6ec3706d94c7" containerID="416508e8f33716280b0c0a8e57ee354f0575f00860e70fc7416b44edfdb88a32" exitCode=0 Feb 03 06:17:38 crc kubenswrapper[4872]: I0203 06:17:38.249369 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m4mq8" event={"ID":"27de9be5-8c0c-4283-81ab-6ec3706d94c7","Type":"ContainerDied","Data":"416508e8f33716280b0c0a8e57ee354f0575f00860e70fc7416b44edfdb88a32"} Feb 03 06:17:38 crc kubenswrapper[4872]: I0203 06:17:38.251515 4872 generic.go:334] "Generic (PLEG): container finished" podID="d2bc170c-2ba9-485b-864b-7e5843bbae3d" containerID="c45601ce1502c204820a1135b992d3ab8d19e29c6292b60de91cb0da73254999" exitCode=0 Feb 03 06:17:38 crc kubenswrapper[4872]: I0203 06:17:38.251557 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7-config-8z2f8" event={"ID":"d2bc170c-2ba9-485b-864b-7e5843bbae3d","Type":"ContainerDied","Data":"c45601ce1502c204820a1135b992d3ab8d19e29c6292b60de91cb0da73254999"} Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.407398 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.430573 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53916dd7-8beb-48bb-8689-5693b2b3cf6f-etc-swift\") pod \"swift-storage-0\" (UID: \"53916dd7-8beb-48bb-8689-5693b2b3cf6f\") " pod="openstack/swift-storage-0" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.615884 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.693812 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.698916 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.812586 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-combined-ca-bundle\") pod \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.812873 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-swiftconf\") pod \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.812894 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run\") pod \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.812954 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-dispersionconf\") pod \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.812987 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cps28\" (UniqueName: \"kubernetes.io/projected/d2bc170c-2ba9-485b-864b-7e5843bbae3d-kube-api-access-cps28\") pod \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813051 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-log-ovn\") pod \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813095 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27de9be5-8c0c-4283-81ab-6ec3706d94c7-etc-swift\") pod \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813118 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run-ovn\") pod \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813172 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-additional-scripts\") pod \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813200 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-ring-data-devices\") pod \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\" (UID: 
\"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813264 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d2bc170c-2ba9-485b-864b-7e5843bbae3d" (UID: "d2bc170c-2ba9-485b-864b-7e5843bbae3d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813291 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run" (OuterVolumeSpecName: "var-run") pod "d2bc170c-2ba9-485b-864b-7e5843bbae3d" (UID: "d2bc170c-2ba9-485b-864b-7e5843bbae3d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813656 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-scripts\") pod \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813732 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-scripts\") pod \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\" (UID: \"d2bc170c-2ba9-485b-864b-7e5843bbae3d\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.813760 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phnwn\" (UniqueName: \"kubernetes.io/projected/27de9be5-8c0c-4283-81ab-6ec3706d94c7-kube-api-access-phnwn\") pod \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\" (UID: \"27de9be5-8c0c-4283-81ab-6ec3706d94c7\") " Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.815430 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d2bc170c-2ba9-485b-864b-7e5843bbae3d" (UID: "d2bc170c-2ba9-485b-864b-7e5843bbae3d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.815512 4872 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.815526 4872 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.815584 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d2bc170c-2ba9-485b-864b-7e5843bbae3d" (UID: "d2bc170c-2ba9-485b-864b-7e5843bbae3d"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.815715 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27de9be5-8c0c-4283-81ab-6ec3706d94c7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "27de9be5-8c0c-4283-81ab-6ec3706d94c7" (UID: "27de9be5-8c0c-4283-81ab-6ec3706d94c7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.815915 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "27de9be5-8c0c-4283-81ab-6ec3706d94c7" (UID: "27de9be5-8c0c-4283-81ab-6ec3706d94c7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.818520 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-scripts" (OuterVolumeSpecName: "scripts") pod "d2bc170c-2ba9-485b-864b-7e5843bbae3d" (UID: "d2bc170c-2ba9-485b-864b-7e5843bbae3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.822760 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27de9be5-8c0c-4283-81ab-6ec3706d94c7-kube-api-access-phnwn" (OuterVolumeSpecName: "kube-api-access-phnwn") pod "27de9be5-8c0c-4283-81ab-6ec3706d94c7" (UID: "27de9be5-8c0c-4283-81ab-6ec3706d94c7"). InnerVolumeSpecName "kube-api-access-phnwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.837995 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-scripts" (OuterVolumeSpecName: "scripts") pod "27de9be5-8c0c-4283-81ab-6ec3706d94c7" (UID: "27de9be5-8c0c-4283-81ab-6ec3706d94c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.838810 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "27de9be5-8c0c-4283-81ab-6ec3706d94c7" (UID: "27de9be5-8c0c-4283-81ab-6ec3706d94c7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.839442 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2bc170c-2ba9-485b-864b-7e5843bbae3d-kube-api-access-cps28" (OuterVolumeSpecName: "kube-api-access-cps28") pod "d2bc170c-2ba9-485b-864b-7e5843bbae3d" (UID: "d2bc170c-2ba9-485b-864b-7e5843bbae3d"). InnerVolumeSpecName "kube-api-access-cps28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.848291 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "27de9be5-8c0c-4283-81ab-6ec3706d94c7" (UID: "27de9be5-8c0c-4283-81ab-6ec3706d94c7"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.854316 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27de9be5-8c0c-4283-81ab-6ec3706d94c7" (UID: "27de9be5-8c0c-4283-81ab-6ec3706d94c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918104 4872 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918142 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918155 4872 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27de9be5-8c0c-4283-81ab-6ec3706d94c7-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918167 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cps28\" (UniqueName: \"kubernetes.io/projected/d2bc170c-2ba9-485b-864b-7e5843bbae3d-kube-api-access-cps28\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918182 4872 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27de9be5-8c0c-4283-81ab-6ec3706d94c7-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918192 4872 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d2bc170c-2ba9-485b-864b-7e5843bbae3d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918204 4872 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918216 4872 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918227 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27de9be5-8c0c-4283-81ab-6ec3706d94c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918238 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d2bc170c-2ba9-485b-864b-7e5843bbae3d-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:39 crc kubenswrapper[4872]: I0203 06:17:39.918250 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phnwn\" (UniqueName: \"kubernetes.io/projected/27de9be5-8c0c-4283-81ab-6ec3706d94c7-kube-api-access-phnwn\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.051807 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/swift-storage-0"] Feb 03 06:17:40 crc kubenswrapper[4872]: W0203 06:17:40.074662 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53916dd7_8beb_48bb_8689_5693b2b3cf6f.slice/crio-ea52acc45c5f955f34f3281ce74255c4b480463a0ca63b3fd0b7caa95076d80c WatchSource:0}: Error finding container ea52acc45c5f955f34f3281ce74255c4b480463a0ca63b3fd0b7caa95076d80c: Status 404 returned error can't find the container with id ea52acc45c5f955f34f3281ce74255c4b480463a0ca63b3fd0b7caa95076d80c Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.268848 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-m4mq8" event={"ID":"27de9be5-8c0c-4283-81ab-6ec3706d94c7","Type":"ContainerDied","Data":"8f6f6d529b54c8b1ac33a3989e08581988ce9212d8fbcdbf64c356406f7d2cf6"} Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.268891 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f6f6d529b54c8b1ac33a3989e08581988ce9212d8fbcdbf64c356406f7d2cf6" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.268956 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m4mq8" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.271918 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7-config-8z2f8" event={"ID":"d2bc170c-2ba9-485b-864b-7e5843bbae3d","Type":"ContainerDied","Data":"bccec67d9236bc38899976fa6562b3d6c9594b8a4b171fc5f07c10eac93d4f5d"} Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.271952 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bccec67d9236bc38899976fa6562b3d6c9594b8a4b171fc5f07c10eac93d4f5d" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.272017 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-8z2f8" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.274678 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"ea52acc45c5f955f34f3281ce74255c4b480463a0ca63b3fd0b7caa95076d80c"} Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.528671 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wjbc7" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.897797 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wjbc7-config-8z2f8"] Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.903728 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wjbc7-config-8z2f8"] Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.915462 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.995757 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wjbc7-config-bsxs8"] Feb 03 06:17:40 crc kubenswrapper[4872]: E0203 06:17:40.996786 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27de9be5-8c0c-4283-81ab-6ec3706d94c7" containerName="swift-ring-rebalance" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.996803 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="27de9be5-8c0c-4283-81ab-6ec3706d94c7" containerName="swift-ring-rebalance" Feb 03 06:17:40 crc kubenswrapper[4872]: E0203 06:17:40.996843 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2bc170c-2ba9-485b-864b-7e5843bbae3d" containerName="ovn-config" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.996850 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2bc170c-2ba9-485b-864b-7e5843bbae3d" containerName="ovn-config" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.997156 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="27de9be5-8c0c-4283-81ab-6ec3706d94c7" containerName="swift-ring-rebalance" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.997179 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2bc170c-2ba9-485b-864b-7e5843bbae3d" containerName="ovn-config" Feb 03 06:17:40 crc kubenswrapper[4872]: I0203 06:17:40.997878 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.019596 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.019876 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.036508 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wjbc7-config-bsxs8"] Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.051334 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.148740 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc6tn\" (UniqueName: \"kubernetes.io/projected/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-kube-api-access-qc6tn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.148792 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run-ovn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.148846 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-additional-scripts\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.148889 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-scripts\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.148983 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-log-ovn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.149002 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.169168 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt2w2"] Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.250996 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-additional-scripts\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.251077 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-scripts\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.251208 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-log-ovn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.251235 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.251505 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc6tn\" (UniqueName: \"kubernetes.io/projected/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-kube-api-access-qc6tn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.251582 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run-ovn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.251994 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run-ovn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.252061 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-log-ovn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.252107 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.252920 4872 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-additional-scripts\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.255282 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-scripts\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.272553 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc6tn\" (UniqueName: \"kubernetes.io/projected/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-kube-api-access-qc6tn\") pod \"ovn-controller-wjbc7-config-bsxs8\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.349308 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.629157 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:17:41 crc kubenswrapper[4872]: I0203 06:17:41.888285 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wjbc7-config-bsxs8"] Feb 03 06:17:42 crc kubenswrapper[4872]: W0203 06:17:42.006546 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebc9cda7_10ba_4fd4_94d8_1ceb547d9f9a.slice/crio-d1c56a32ffc680eaf653ba03020fea085e71916abf2f59b5beffc634d3d1ab6b WatchSource:0}: Error finding container d1c56a32ffc680eaf653ba03020fea085e71916abf2f59b5beffc634d3d1ab6b: Status 404 returned error can't find the container with id d1c56a32ffc680eaf653ba03020fea085e71916abf2f59b5beffc634d3d1ab6b Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.140651 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2bc170c-2ba9-485b-864b-7e5843bbae3d" path="/var/lib/kubelet/pods/d2bc170c-2ba9-485b-864b-7e5843bbae3d/volumes" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.305731 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7-config-bsxs8" event={"ID":"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a","Type":"ContainerStarted","Data":"d1c56a32ffc680eaf653ba03020fea085e71916abf2f59b5beffc634d3d1ab6b"} Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.305849 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jt2w2" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" containerID="cri-o://9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6" gracePeriod=2 Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.521505 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7rrj2"] Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.526217 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.528553 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.542185 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7rrj2"] Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.679193 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d48c0f3-0353-4e0d-a5af-089233c0ab65-operator-scripts\") pod \"root-account-create-update-7rrj2\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.679510 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9rp\" (UniqueName: \"kubernetes.io/projected/7d48c0f3-0353-4e0d-a5af-089233c0ab65-kube-api-access-fs9rp\") pod \"root-account-create-update-7rrj2\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.781273 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d48c0f3-0353-4e0d-a5af-089233c0ab65-operator-scripts\") pod \"root-account-create-update-7rrj2\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.781343 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs9rp\" (UniqueName: \"kubernetes.io/projected/7d48c0f3-0353-4e0d-a5af-089233c0ab65-kube-api-access-fs9rp\") pod \"root-account-create-update-7rrj2\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.782105 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d48c0f3-0353-4e0d-a5af-089233c0ab65-operator-scripts\") pod \"root-account-create-update-7rrj2\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.798479 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs9rp\" (UniqueName: \"kubernetes.io/projected/7d48c0f3-0353-4e0d-a5af-089233c0ab65-kube-api-access-fs9rp\") pod \"root-account-create-update-7rrj2\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.844375 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.879640 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.983999 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-catalog-content\") pod \"d7c431d4-8bd4-456f-b697-4a62642afea1\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.984468 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7sgr\" (UniqueName: \"kubernetes.io/projected/d7c431d4-8bd4-456f-b697-4a62642afea1-kube-api-access-f7sgr\") pod \"d7c431d4-8bd4-456f-b697-4a62642afea1\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.984563 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-utilities\") pod \"d7c431d4-8bd4-456f-b697-4a62642afea1\" (UID: \"d7c431d4-8bd4-456f-b697-4a62642afea1\") " Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.991165 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-utilities" (OuterVolumeSpecName: "utilities") pod "d7c431d4-8bd4-456f-b697-4a62642afea1" (UID: "d7c431d4-8bd4-456f-b697-4a62642afea1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:17:42 crc kubenswrapper[4872]: I0203 06:17:42.991839 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c431d4-8bd4-456f-b697-4a62642afea1-kube-api-access-f7sgr" (OuterVolumeSpecName: "kube-api-access-f7sgr") pod "d7c431d4-8bd4-456f-b697-4a62642afea1" (UID: "d7c431d4-8bd4-456f-b697-4a62642afea1"). InnerVolumeSpecName "kube-api-access-f7sgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.086720 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7sgr\" (UniqueName: \"kubernetes.io/projected/d7c431d4-8bd4-456f-b697-4a62642afea1-kube-api-access-f7sgr\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.086750 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.127879 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7c431d4-8bd4-456f-b697-4a62642afea1" (UID: "d7c431d4-8bd4-456f-b697-4a62642afea1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.187577 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c431d4-8bd4-456f-b697-4a62642afea1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.190592 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7rrj2"] Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.315933 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"f88ae4eb7939aade58e0384af0bfd3edf8425a2a92035dfdd646ec01dcec4b71"} Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.315996 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"ef00b1bdda791360e7de5c7cc7280f3624957b94f69089e76a03a61bc66b3955"} Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.317366 4872 generic.go:334] "Generic (PLEG): container finished" podID="ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" containerID="15a673896b55e960cd90aec59634a964d48c5217b5c5f168b8568f222659596a" exitCode=0 Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.317441 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7-config-bsxs8" event={"ID":"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a","Type":"ContainerDied","Data":"15a673896b55e960cd90aec59634a964d48c5217b5c5f168b8568f222659596a"} Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.320102 4872 generic.go:334] "Generic (PLEG): container finished" podID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerID="9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6" exitCode=0 Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.320164 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2w2" event={"ID":"d7c431d4-8bd4-456f-b697-4a62642afea1","Type":"ContainerDied","Data":"9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6"} Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.320178 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jt2w2" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.320190 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2w2" event={"ID":"d7c431d4-8bd4-456f-b697-4a62642afea1","Type":"ContainerDied","Data":"f61fc283b1df0bc4773442311e074343740386138a97be9b76aafe732296701b"} Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.320210 4872 scope.go:117] "RemoveContainer" containerID="9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.321140 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7rrj2" event={"ID":"7d48c0f3-0353-4e0d-a5af-089233c0ab65","Type":"ContainerStarted","Data":"47597873ab6c8c2cbd300c320cdba67559df5c88f3df6365d53488b41c325f11"} Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.343045 4872 scope.go:117] "RemoveContainer" containerID="f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.367039 4872 scope.go:117] "RemoveContainer" containerID="21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.386484 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt2w2"] Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.389596 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jt2w2"] Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.391129 4872 scope.go:117] "RemoveContainer" containerID="9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6" Feb 03 06:17:43 crc kubenswrapper[4872]: E0203 06:17:43.391647 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6\": container with ID starting with 9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6 not found: ID does not exist" containerID="9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.391745 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6"} err="failed to get container status \"9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6\": rpc error: code = NotFound desc = could not find container \"9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6\": container with ID starting with 9faf9ed548b5edb3779a10753f98daff73253013c8e8ba42375b18e95da844c6 not found: ID does not exist" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.391774 4872 scope.go:117] "RemoveContainer" containerID="f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf" Feb 03 06:17:43 crc kubenswrapper[4872]: E0203 06:17:43.392137 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf\": container with ID starting with f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf not found: ID does not exist" containerID="f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.392156 4872 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf"} err="failed to get container status \"f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf\": rpc error: code = NotFound desc = could not find container \"f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf\": container with ID starting with f0333e2c1c9ce331c5ca0fe15dc9054e7c52056433c428485ee1480181392fdf not found: ID does not exist" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.392171 4872 scope.go:117] "RemoveContainer" containerID="21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175" Feb 03 06:17:43 crc kubenswrapper[4872]: E0203 06:17:43.392522 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175\": container with ID starting with 21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175 not found: ID does not exist" containerID="21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175" Feb 03 06:17:43 crc kubenswrapper[4872]: I0203 06:17:43.392555 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175"} err="failed to get container status \"21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175\": rpc error: code = NotFound desc = could not find container \"21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175\": container with ID starting with 21abb02baf5039b4d8d51ea40ecf4485e92ee01516b5aff09e048c4dcd09f175 not found: ID does not exist" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.135593 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" path="/var/lib/kubelet/pods/d7c431d4-8bd4-456f-b697-4a62642afea1/volumes" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.331342 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7rrj2" event={"ID":"7d48c0f3-0353-4e0d-a5af-089233c0ab65","Type":"ContainerStarted","Data":"c086e84649ad4be8c51596273cc319f77b31ce662e1befb6ec44377821d1f9e2"} Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.334544 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"13c53d96b8a6deb8efa6beae1ac33d61d1441f093087ae6cb2053f407dccfa1e"} Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.334577 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"105f79bce93095eaa99ed1bb6731ee7ef59dfc07f81ba5d5b4471e48c8e1d63a"} Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.354160 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-7rrj2" podStartSLOduration=2.3541441770000002 podStartE2EDuration="2.354144177s" podCreationTimestamp="2026-02-03 06:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:44.347405964 +0000 UTC m=+1034.930097378" watchObservedRunningTime="2026-02-03 06:17:44.354144177 +0000 UTC m=+1034.936835591" Feb 03 06:17:44 crc 
kubenswrapper[4872]: I0203 06:17:44.640293 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.824857 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc6tn\" (UniqueName: \"kubernetes.io/projected/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-kube-api-access-qc6tn\") pod \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825481 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-scripts\") pod \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825527 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-log-ovn\") pod \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825543 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run-ovn\") pod \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825578 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-additional-scripts\") pod \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825602 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run\") pod \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\" (UID: \"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a\") " Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825783 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" (UID: "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825847 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run" (OuterVolumeSpecName: "var-run") pod "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" (UID: "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.825884 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" (UID: "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.826431 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" (UID: "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.826612 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-scripts" (OuterVolumeSpecName: "scripts") pod "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" (UID: "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.830851 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-kube-api-access-qc6tn" (OuterVolumeSpecName: "kube-api-access-qc6tn") pod "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" (UID: "ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a"). InnerVolumeSpecName "kube-api-access-qc6tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.927707 4872 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.927970 4872 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.927980 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc6tn\" (UniqueName: \"kubernetes.io/projected/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-kube-api-access-qc6tn\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.927990 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.927998 4872 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:44 crc kubenswrapper[4872]: I0203 06:17:44.928007 4872 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:45 crc kubenswrapper[4872]: I0203 06:17:45.343759 4872 generic.go:334] "Generic (PLEG): container finished" podID="7d48c0f3-0353-4e0d-a5af-089233c0ab65" containerID="c086e84649ad4be8c51596273cc319f77b31ce662e1befb6ec44377821d1f9e2" exitCode=0 Feb 03 06:17:45 crc kubenswrapper[4872]: I0203 06:17:45.343795 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7rrj2" 
event={"ID":"7d48c0f3-0353-4e0d-a5af-089233c0ab65","Type":"ContainerDied","Data":"c086e84649ad4be8c51596273cc319f77b31ce662e1befb6ec44377821d1f9e2"} Feb 03 06:17:45 crc kubenswrapper[4872]: I0203 06:17:45.347503 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wjbc7-config-bsxs8" event={"ID":"ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a","Type":"ContainerDied","Data":"d1c56a32ffc680eaf653ba03020fea085e71916abf2f59b5beffc634d3d1ab6b"} Feb 03 06:17:45 crc kubenswrapper[4872]: I0203 06:17:45.347531 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c56a32ffc680eaf653ba03020fea085e71916abf2f59b5beffc634d3d1ab6b" Feb 03 06:17:45 crc kubenswrapper[4872]: I0203 06:17:45.347595 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wjbc7-config-bsxs8" Feb 03 06:17:45 crc kubenswrapper[4872]: I0203 06:17:45.713656 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wjbc7-config-bsxs8"] Feb 03 06:17:45 crc kubenswrapper[4872]: I0203 06:17:45.721557 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wjbc7-config-bsxs8"] Feb 03 06:17:46 crc kubenswrapper[4872]: I0203 06:17:46.131339 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" path="/var/lib/kubelet/pods/ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a/volumes" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.235942 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.575447 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dvnrz"] Feb 03 06:17:51 crc kubenswrapper[4872]: E0203 06:17:51.575772 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" containerName="ovn-config" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.575789 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" containerName="ovn-config" Feb 03 06:17:51 crc kubenswrapper[4872]: E0203 06:17:51.575798 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="extract-utilities" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.575805 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="extract-utilities" Feb 03 06:17:51 crc kubenswrapper[4872]: E0203 06:17:51.575819 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="extract-content" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.575825 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="extract-content" Feb 03 06:17:51 crc kubenswrapper[4872]: E0203 06:17:51.575836 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.575841 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.576009 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc9cda7-10ba-4fd4-94d8-1ceb547d9f9a" 
containerName="ovn-config" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.576031 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c431d4-8bd4-456f-b697-4a62642afea1" containerName="registry-server" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.576501 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.604228 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dvnrz"] Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.635202 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.642795 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whbhj\" (UniqueName: \"kubernetes.io/projected/953773a5-8de7-4b75-9e0b-a0effcbb297c-kube-api-access-whbhj\") pod \"cinder-db-create-dvnrz\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.642827 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953773a5-8de7-4b75-9e0b-a0effcbb297c-operator-scripts\") pod \"cinder-db-create-dvnrz\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.709165 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-v7btd"] Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.710328 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.734001 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v7btd"] Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.756566 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whbhj\" (UniqueName: \"kubernetes.io/projected/953773a5-8de7-4b75-9e0b-a0effcbb297c-kube-api-access-whbhj\") pod \"cinder-db-create-dvnrz\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.756632 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953773a5-8de7-4b75-9e0b-a0effcbb297c-operator-scripts\") pod \"cinder-db-create-dvnrz\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.757779 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953773a5-8de7-4b75-9e0b-a0effcbb297c-operator-scripts\") pod \"cinder-db-create-dvnrz\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.780128 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whbhj\" (UniqueName: \"kubernetes.io/projected/953773a5-8de7-4b75-9e0b-a0effcbb297c-kube-api-access-whbhj\") pod \"cinder-db-create-dvnrz\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.800185 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-daec-account-create-update-kb2fr"] Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.801409 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.813667 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.818771 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-daec-account-create-update-kb2fr"] Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.860246 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsltd\" (UniqueName: \"kubernetes.io/projected/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-kube-api-access-nsltd\") pod \"barbican-db-create-v7btd\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.860302 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-operator-scripts\") pod \"barbican-db-create-v7btd\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.893140 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dvnrz" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.961545 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-operator-scripts\") pod \"barbican-db-create-v7btd\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.961630 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897f2bae-39b3-4862-b0a9-d9652b593e98-operator-scripts\") pod \"barbican-daec-account-create-update-kb2fr\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.961661 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnnw\" (UniqueName: \"kubernetes.io/projected/897f2bae-39b3-4862-b0a9-d9652b593e98-kube-api-access-pjnnw\") pod \"barbican-daec-account-create-update-kb2fr\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.961969 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsltd\" (UniqueName: \"kubernetes.io/projected/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-kube-api-access-nsltd\") pod \"barbican-db-create-v7btd\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:51 crc kubenswrapper[4872]: I0203 06:17:51.962939 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-operator-scripts\") pod \"barbican-db-create-v7btd\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.006441 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsltd\" (UniqueName: \"kubernetes.io/projected/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-kube-api-access-nsltd\") pod \"barbican-db-create-v7btd\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.019020 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nddft"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.020245 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.030167 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v7btd" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.034743 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-767c2"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.035835 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.038981 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.039237 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.039324 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mb5hw" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.039355 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.053798 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a758-account-create-update-cjr6t"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.054813 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.063019 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897f2bae-39b3-4862-b0a9-d9652b593e98-operator-scripts\") pod \"barbican-daec-account-create-update-kb2fr\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.063138 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjnnw\" (UniqueName: \"kubernetes.io/projected/897f2bae-39b3-4862-b0a9-d9652b593e98-kube-api-access-pjnnw\") pod \"barbican-daec-account-create-update-kb2fr\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.063740 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897f2bae-39b3-4862-b0a9-d9652b593e98-operator-scripts\") pod \"barbican-daec-account-create-update-kb2fr\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.065864 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nddft"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.075235 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.092357 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjnnw\" (UniqueName: \"kubernetes.io/projected/897f2bae-39b3-4862-b0a9-d9652b593e98-kube-api-access-pjnnw\") pod \"barbican-daec-account-create-update-kb2fr\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.097726 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-767c2"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.133446 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.138363 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a758-account-create-update-cjr6t"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.166151 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrs6v\" (UniqueName: \"kubernetes.io/projected/02f4db43-1e9b-4fb5-8781-aeb301bc1298-kube-api-access-mrs6v\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.166208 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvr6j\" (UniqueName: \"kubernetes.io/projected/6abead70-db1e-4e93-9e76-214427aa519a-kube-api-access-gvr6j\") pod \"neutron-db-create-nddft\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.166338 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33dd1ede-3c66-4876-8ada-0ed81db2d705-operator-scripts\") pod \"cinder-a758-account-create-update-cjr6t\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.166407 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxqzj\" (UniqueName: \"kubernetes.io/projected/33dd1ede-3c66-4876-8ada-0ed81db2d705-kube-api-access-cxqzj\") pod \"cinder-a758-account-create-update-cjr6t\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.166502 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-combined-ca-bundle\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.166579 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-config-data\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.166667 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abead70-db1e-4e93-9e76-214427aa519a-operator-scripts\") pod \"neutron-db-create-nddft\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.267715 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxqzj\" (UniqueName: \"kubernetes.io/projected/33dd1ede-3c66-4876-8ada-0ed81db2d705-kube-api-access-cxqzj\") pod \"cinder-a758-account-create-update-cjr6t\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " 
pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.267817 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-combined-ca-bundle\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.267859 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-config-data\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.267911 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abead70-db1e-4e93-9e76-214427aa519a-operator-scripts\") pod \"neutron-db-create-nddft\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.267938 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrs6v\" (UniqueName: \"kubernetes.io/projected/02f4db43-1e9b-4fb5-8781-aeb301bc1298-kube-api-access-mrs6v\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.267966 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvr6j\" (UniqueName: \"kubernetes.io/projected/6abead70-db1e-4e93-9e76-214427aa519a-kube-api-access-gvr6j\") pod \"neutron-db-create-nddft\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.268010 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33dd1ede-3c66-4876-8ada-0ed81db2d705-operator-scripts\") pod \"cinder-a758-account-create-update-cjr6t\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.268630 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33dd1ede-3c66-4876-8ada-0ed81db2d705-operator-scripts\") pod \"cinder-a758-account-create-update-cjr6t\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.269176 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abead70-db1e-4e93-9e76-214427aa519a-operator-scripts\") pod \"neutron-db-create-nddft\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.273164 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-config-data\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc 
kubenswrapper[4872]: I0203 06:17:52.275203 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-combined-ca-bundle\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.295473 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrs6v\" (UniqueName: \"kubernetes.io/projected/02f4db43-1e9b-4fb5-8781-aeb301bc1298-kube-api-access-mrs6v\") pod \"keystone-db-sync-767c2\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.302296 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxqzj\" (UniqueName: \"kubernetes.io/projected/33dd1ede-3c66-4876-8ada-0ed81db2d705-kube-api-access-cxqzj\") pod \"cinder-a758-account-create-update-cjr6t\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.303208 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvr6j\" (UniqueName: \"kubernetes.io/projected/6abead70-db1e-4e93-9e76-214427aa519a-kube-api-access-gvr6j\") pod \"neutron-db-create-nddft\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.376047 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nddft" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.382492 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-53af-account-create-update-z7v7c"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.383374 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.386672 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.389885 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-767c2" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.400999 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-53af-account-create-update-z7v7c"] Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.439324 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.471620 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrbn\" (UniqueName: \"kubernetes.io/projected/b0a2c89e-c85a-407d-a759-ac0851d0636f-kube-api-access-7lrbn\") pod \"neutron-53af-account-create-update-z7v7c\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.471736 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a2c89e-c85a-407d-a759-ac0851d0636f-operator-scripts\") pod \"neutron-53af-account-create-update-z7v7c\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.573643 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lrbn\" (UniqueName: \"kubernetes.io/projected/b0a2c89e-c85a-407d-a759-ac0851d0636f-kube-api-access-7lrbn\") pod \"neutron-53af-account-create-update-z7v7c\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.573752 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a2c89e-c85a-407d-a759-ac0851d0636f-operator-scripts\") pod \"neutron-53af-account-create-update-z7v7c\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.574446 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a2c89e-c85a-407d-a759-ac0851d0636f-operator-scripts\") pod \"neutron-53af-account-create-update-z7v7c\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.595321 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrbn\" (UniqueName: \"kubernetes.io/projected/b0a2c89e-c85a-407d-a759-ac0851d0636f-kube-api-access-7lrbn\") pod \"neutron-53af-account-create-update-z7v7c\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:52 crc kubenswrapper[4872]: I0203 06:17:52.700649 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:17:55 crc kubenswrapper[4872]: E0203 06:17:55.865611 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 03 06:17:55 crc kubenswrapper[4872]: E0203 06:17:55.866242 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk4h6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-6j26n_openstack(5a290b0d-8a5a-426e-9561-9372ba41afb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:17:55 crc kubenswrapper[4872]: E0203 06:17:55.868459 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-6j26n" podUID="5a290b0d-8a5a-426e-9561-9372ba41afb5" Feb 03 06:17:55 crc kubenswrapper[4872]: I0203 06:17:55.879322 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.040463 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d48c0f3-0353-4e0d-a5af-089233c0ab65-operator-scripts\") pod \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.040869 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs9rp\" (UniqueName: \"kubernetes.io/projected/7d48c0f3-0353-4e0d-a5af-089233c0ab65-kube-api-access-fs9rp\") pod \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\" (UID: \"7d48c0f3-0353-4e0d-a5af-089233c0ab65\") " Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.041209 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d48c0f3-0353-4e0d-a5af-089233c0ab65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d48c0f3-0353-4e0d-a5af-089233c0ab65" (UID: "7d48c0f3-0353-4e0d-a5af-089233c0ab65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.041320 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d48c0f3-0353-4e0d-a5af-089233c0ab65-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.054168 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d48c0f3-0353-4e0d-a5af-089233c0ab65-kube-api-access-fs9rp" (OuterVolumeSpecName: "kube-api-access-fs9rp") pod "7d48c0f3-0353-4e0d-a5af-089233c0ab65" (UID: "7d48c0f3-0353-4e0d-a5af-089233c0ab65"). InnerVolumeSpecName "kube-api-access-fs9rp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:17:56 crc kubenswrapper[4872]: E0203 06:17:56.097469 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-container:current-podified" Feb 03 06:17:56 crc kubenswrapper[4872]: E0203 06:17:56.097634 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-server,Image:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,Command:[/usr/bin/swift-container-server /etc/swift/container-server.conf.d -v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:container,HostPort:0,ContainerPort:6201,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56h6bh67h56h67bh5c4hdfh58bhb4h65fh64h5cch7fh54dh9ch54bh677h65dh655h578h676h646hf4h64fh65h5bdh54h9chc5h54bh97h78q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tzd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(53916dd7-8beb-48bb-8689-5693b2b3cf6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.143027 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs9rp\" (UniqueName: \"kubernetes.io/projected/7d48c0f3-0353-4e0d-a5af-089233c0ab65-kube-api-access-fs9rp\") on node \"crc\" DevicePath \"\"" Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.513100 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7rrj2" event={"ID":"7d48c0f3-0353-4e0d-a5af-089233c0ab65","Type":"ContainerDied","Data":"47597873ab6c8c2cbd300c320cdba67559df5c88f3df6365d53488b41c325f11"} Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.513389 4872 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="47597873ab6c8c2cbd300c320cdba67559df5c88f3df6365d53488b41c325f11" Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.513511 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7rrj2" Feb 03 06:17:56 crc kubenswrapper[4872]: E0203 06:17:56.517895 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-6j26n" podUID="5a290b0d-8a5a-426e-9561-9372ba41afb5" Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.806505 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v7btd"] Feb 03 06:17:56 crc kubenswrapper[4872]: W0203 06:17:56.856038 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953773a5_8de7_4b75_9e0b_a0effcbb297c.slice/crio-55a2c515029a3f801627bf68b4a5ae495d947557136e119a53a10b4485a4eb3d WatchSource:0}: Error finding container 55a2c515029a3f801627bf68b4a5ae495d947557136e119a53a10b4485a4eb3d: Status 404 returned error can't find the container with id 55a2c515029a3f801627bf68b4a5ae495d947557136e119a53a10b4485a4eb3d Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.859943 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dvnrz"] Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.942135 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-daec-account-create-update-kb2fr"] Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.958602 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nddft"] Feb 03 06:17:56 crc kubenswrapper[4872]: I0203 06:17:56.964748 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-767c2"] Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.075156 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-53af-account-create-update-z7v7c"] Feb 03 06:17:57 crc kubenswrapper[4872]: W0203 06:17:57.080881 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0a2c89e_c85a_407d_a759_ac0851d0636f.slice/crio-bdac3f0df7420288e12d9021d5449d95f98bbc8144498f9055ad7273745ac6b6 WatchSource:0}: Error finding container bdac3f0df7420288e12d9021d5449d95f98bbc8144498f9055ad7273745ac6b6: Status 404 returned error can't find the container with id bdac3f0df7420288e12d9021d5449d95f98bbc8144498f9055ad7273745ac6b6 Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.085071 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a758-account-create-update-cjr6t"] Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.522538 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvnrz" event={"ID":"953773a5-8de7-4b75-9e0b-a0effcbb297c","Type":"ContainerStarted","Data":"24ed544bc2f8dc45039c6c2f1503b44dfa31ecf46666012fd1bdc7f1737b1fa2"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.522834 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvnrz" event={"ID":"953773a5-8de7-4b75-9e0b-a0effcbb297c","Type":"ContainerStarted","Data":"55a2c515029a3f801627bf68b4a5ae495d947557136e119a53a10b4485a4eb3d"} Feb 03 06:17:57 crc 
kubenswrapper[4872]: I0203 06:17:57.524567 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nddft" event={"ID":"6abead70-db1e-4e93-9e76-214427aa519a","Type":"ContainerStarted","Data":"d7a504265e555e9ade9f829748d64c7eaef3223ef8757798f1eab69b56e9c378"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.524607 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nddft" event={"ID":"6abead70-db1e-4e93-9e76-214427aa519a","Type":"ContainerStarted","Data":"138f6a84b28325849d5f0f2d410c59ba8fe350f472daf28ce9fc8ffee859a8b8"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.525828 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-daec-account-create-update-kb2fr" event={"ID":"897f2bae-39b3-4862-b0a9-d9652b593e98","Type":"ContainerStarted","Data":"9cfd4bdac0a9127767f2a93034ca9fbd595b79db4fdad6a038383cdf6d93a465"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.525983 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-daec-account-create-update-kb2fr" event={"ID":"897f2bae-39b3-4862-b0a9-d9652b593e98","Type":"ContainerStarted","Data":"45d72dadcc1dce5d1378fd0f1702cc7fafd6bcf3c45d84316a7cd008b86ad71f"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.528540 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-767c2" event={"ID":"02f4db43-1e9b-4fb5-8781-aeb301bc1298","Type":"ContainerStarted","Data":"d206861603d61dee9c32229c475ab4e3dd9eef9aa567cbf1f1932858236704d9"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.530549 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7btd" event={"ID":"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf","Type":"ContainerStarted","Data":"59c38abc9ba32cbc9ab7b9054c74051aa77e09e322dbe8fb028a930171ab21e5"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.530740 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7btd" event={"ID":"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf","Type":"ContainerStarted","Data":"70218b33775210b20476edc18680dc0b8acba3bf7fe3cbd76ac9c6eca56b4850"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.531965 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a758-account-create-update-cjr6t" event={"ID":"33dd1ede-3c66-4876-8ada-0ed81db2d705","Type":"ContainerStarted","Data":"9139c28bb4be76837aa461dba83b3e1b978a20ba6143d96447f984346aabf17a"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.532134 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a758-account-create-update-cjr6t" event={"ID":"33dd1ede-3c66-4876-8ada-0ed81db2d705","Type":"ContainerStarted","Data":"c7633ff695ac984e59d36c2d9f50dcf1f6b302519167420673c0628e2e2e37fe"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.533523 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53af-account-create-update-z7v7c" event={"ID":"b0a2c89e-c85a-407d-a759-ac0851d0636f","Type":"ContainerStarted","Data":"636ebcf7d176b4ce5a7a48e93649aeec52ded44c6b0e32e0c5bbffb0567212d5"} Feb 03 06:17:57 crc kubenswrapper[4872]: I0203 06:17:57.533635 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53af-account-create-update-z7v7c" event={"ID":"b0a2c89e-c85a-407d-a759-ac0851d0636f","Type":"ContainerStarted","Data":"bdac3f0df7420288e12d9021d5449d95f98bbc8144498f9055ad7273745ac6b6"} Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 
06:17:58.546860 4872 generic.go:334] "Generic (PLEG): container finished" podID="897f2bae-39b3-4862-b0a9-d9652b593e98" containerID="9cfd4bdac0a9127767f2a93034ca9fbd595b79db4fdad6a038383cdf6d93a465" exitCode=0 Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.546910 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-daec-account-create-update-kb2fr" event={"ID":"897f2bae-39b3-4862-b0a9-d9652b593e98","Type":"ContainerDied","Data":"9cfd4bdac0a9127767f2a93034ca9fbd595b79db4fdad6a038383cdf6d93a465"} Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.567785 4872 generic.go:334] "Generic (PLEG): container finished" podID="6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf" containerID="59c38abc9ba32cbc9ab7b9054c74051aa77e09e322dbe8fb028a930171ab21e5" exitCode=0 Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.567871 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7btd" event={"ID":"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf","Type":"ContainerDied","Data":"59c38abc9ba32cbc9ab7b9054c74051aa77e09e322dbe8fb028a930171ab21e5"} Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.569573 4872 generic.go:334] "Generic (PLEG): container finished" podID="953773a5-8de7-4b75-9e0b-a0effcbb297c" containerID="24ed544bc2f8dc45039c6c2f1503b44dfa31ecf46666012fd1bdc7f1737b1fa2" exitCode=0 Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.569616 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvnrz" event={"ID":"953773a5-8de7-4b75-9e0b-a0effcbb297c","Type":"ContainerDied","Data":"24ed544bc2f8dc45039c6c2f1503b44dfa31ecf46666012fd1bdc7f1737b1fa2"} Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.575985 4872 generic.go:334] "Generic (PLEG): container finished" podID="6abead70-db1e-4e93-9e76-214427aa519a" containerID="d7a504265e555e9ade9f829748d64c7eaef3223ef8757798f1eab69b56e9c378" exitCode=0 Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.576048 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nddft" event={"ID":"6abead70-db1e-4e93-9e76-214427aa519a","Type":"ContainerDied","Data":"d7a504265e555e9ade9f829748d64c7eaef3223ef8757798f1eab69b56e9c378"} Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.615826 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-53af-account-create-update-z7v7c" podStartSLOduration=6.615805686 podStartE2EDuration="6.615805686s" podCreationTimestamp="2026-02-03 06:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:58.607477635 +0000 UTC m=+1049.190169059" watchObservedRunningTime="2026-02-03 06:17:58.615805686 +0000 UTC m=+1049.198497120" Feb 03 06:17:58 crc kubenswrapper[4872]: I0203 06:17:58.642368 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a758-account-create-update-cjr6t" podStartSLOduration=7.642349926 podStartE2EDuration="7.642349926s" podCreationTimestamp="2026-02-03 06:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:17:58.63169433 +0000 UTC m=+1049.214385744" watchObservedRunningTime="2026-02-03 06:17:58.642349926 +0000 UTC m=+1049.225041340" Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.588855 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"e5f75ad3a9c4e6600e0fb29d16142af6730544202084ee5e3bd97142227be397"} Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.588892 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"fbbbf02ed8cfef961a0fbeac70d021cae66bf497819a640a4df4e890bdcdea10"} Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.588903 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"06e28a4cbf322b7a8554858cf2f32cc51327e4d2747a165d0c2eca7110e8bb2d"} Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.588911 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"8132160e8c00b851bc3194128bf737dbbb923771f205a991ab936de49bd5b7cc"} Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.588921 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"005de85a10b1e1cdd0dcd6c20f00e024f7e1fb4eec8dc44110e78e956d360893"} Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.590107 4872 generic.go:334] "Generic (PLEG): container finished" podID="b0a2c89e-c85a-407d-a759-ac0851d0636f" containerID="636ebcf7d176b4ce5a7a48e93649aeec52ded44c6b0e32e0c5bbffb0567212d5" exitCode=0 Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.590160 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53af-account-create-update-z7v7c" event={"ID":"b0a2c89e-c85a-407d-a759-ac0851d0636f","Type":"ContainerDied","Data":"636ebcf7d176b4ce5a7a48e93649aeec52ded44c6b0e32e0c5bbffb0567212d5"} Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.592829 4872 generic.go:334] "Generic (PLEG): container finished" podID="33dd1ede-3c66-4876-8ada-0ed81db2d705" containerID="9139c28bb4be76837aa461dba83b3e1b978a20ba6143d96447f984346aabf17a" exitCode=0 Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.592958 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a758-account-create-update-cjr6t" event={"ID":"33dd1ede-3c66-4876-8ada-0ed81db2d705","Type":"ContainerDied","Data":"9139c28bb4be76837aa461dba83b3e1b978a20ba6143d96447f984346aabf17a"} Feb 03 06:17:59 crc kubenswrapper[4872]: I0203 06:17:59.920805 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nddft" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.017140 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7btd" Feb 03 06:18:00 crc kubenswrapper[4872]: E0203 06:18:00.031063 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"container-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="53916dd7-8beb-48bb-8689-5693b2b3cf6f" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.050635 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abead70-db1e-4e93-9e76-214427aa519a-operator-scripts\") pod \"6abead70-db1e-4e93-9e76-214427aa519a\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.050757 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvr6j\" (UniqueName: \"kubernetes.io/projected/6abead70-db1e-4e93-9e76-214427aa519a-kube-api-access-gvr6j\") pod \"6abead70-db1e-4e93-9e76-214427aa519a\" (UID: \"6abead70-db1e-4e93-9e76-214427aa519a\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.052633 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6abead70-db1e-4e93-9e76-214427aa519a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6abead70-db1e-4e93-9e76-214427aa519a" (UID: "6abead70-db1e-4e93-9e76-214427aa519a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.064993 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6abead70-db1e-4e93-9e76-214427aa519a-kube-api-access-gvr6j" (OuterVolumeSpecName: "kube-api-access-gvr6j") pod "6abead70-db1e-4e93-9e76-214427aa519a" (UID: "6abead70-db1e-4e93-9e76-214427aa519a"). InnerVolumeSpecName "kube-api-access-gvr6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.113788 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dvnrz" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.153167 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsltd\" (UniqueName: \"kubernetes.io/projected/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-kube-api-access-nsltd\") pod \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.153320 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-operator-scripts\") pod \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\" (UID: \"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.154605 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf" (UID: "6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.157534 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvr6j\" (UniqueName: \"kubernetes.io/projected/6abead70-db1e-4e93-9e76-214427aa519a-kube-api-access-gvr6j\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.157915 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.157934 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6abead70-db1e-4e93-9e76-214427aa519a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.162334 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.171780 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-kube-api-access-nsltd" (OuterVolumeSpecName: "kube-api-access-nsltd") pod "6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf" (UID: "6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf"). InnerVolumeSpecName "kube-api-access-nsltd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.258628 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953773a5-8de7-4b75-9e0b-a0effcbb297c-operator-scripts\") pod \"953773a5-8de7-4b75-9e0b-a0effcbb297c\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.258780 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whbhj\" (UniqueName: \"kubernetes.io/projected/953773a5-8de7-4b75-9e0b-a0effcbb297c-kube-api-access-whbhj\") pod \"953773a5-8de7-4b75-9e0b-a0effcbb297c\" (UID: \"953773a5-8de7-4b75-9e0b-a0effcbb297c\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.259176 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsltd\" (UniqueName: \"kubernetes.io/projected/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf-kube-api-access-nsltd\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.260287 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953773a5-8de7-4b75-9e0b-a0effcbb297c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "953773a5-8de7-4b75-9e0b-a0effcbb297c" (UID: "953773a5-8de7-4b75-9e0b-a0effcbb297c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.263289 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953773a5-8de7-4b75-9e0b-a0effcbb297c-kube-api-access-whbhj" (OuterVolumeSpecName: "kube-api-access-whbhj") pod "953773a5-8de7-4b75-9e0b-a0effcbb297c" (UID: "953773a5-8de7-4b75-9e0b-a0effcbb297c"). InnerVolumeSpecName "kube-api-access-whbhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.360245 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjnnw\" (UniqueName: \"kubernetes.io/projected/897f2bae-39b3-4862-b0a9-d9652b593e98-kube-api-access-pjnnw\") pod \"897f2bae-39b3-4862-b0a9-d9652b593e98\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.360303 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897f2bae-39b3-4862-b0a9-d9652b593e98-operator-scripts\") pod \"897f2bae-39b3-4862-b0a9-d9652b593e98\" (UID: \"897f2bae-39b3-4862-b0a9-d9652b593e98\") " Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.360725 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953773a5-8de7-4b75-9e0b-a0effcbb297c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.360742 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whbhj\" (UniqueName: \"kubernetes.io/projected/953773a5-8de7-4b75-9e0b-a0effcbb297c-kube-api-access-whbhj\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.361105 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897f2bae-39b3-4862-b0a9-d9652b593e98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "897f2bae-39b3-4862-b0a9-d9652b593e98" (UID: "897f2bae-39b3-4862-b0a9-d9652b593e98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.363542 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897f2bae-39b3-4862-b0a9-d9652b593e98-kube-api-access-pjnnw" (OuterVolumeSpecName: "kube-api-access-pjnnw") pod "897f2bae-39b3-4862-b0a9-d9652b593e98" (UID: "897f2bae-39b3-4862-b0a9-d9652b593e98"). InnerVolumeSpecName "kube-api-access-pjnnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.462168 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjnnw\" (UniqueName: \"kubernetes.io/projected/897f2bae-39b3-4862-b0a9-d9652b593e98-kube-api-access-pjnnw\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.462198 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/897f2bae-39b3-4862-b0a9-d9652b593e98-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.604070 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v7btd" event={"ID":"6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf","Type":"ContainerDied","Data":"70218b33775210b20476edc18680dc0b8acba3bf7fe3cbd76ac9c6eca56b4850"} Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.604110 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70218b33775210b20476edc18680dc0b8acba3bf7fe3cbd76ac9c6eca56b4850" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.604168 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v7btd" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.613131 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"0f739603f8108a987d99fcf51988f45ed798865012e48a40d689ae35e0ead707"} Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.613168 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"bba4d2122129581b5cd590d40ce539f5e32533a41b671b31e686d460b9c1cbf4"} Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.616264 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dvnrz" event={"ID":"953773a5-8de7-4b75-9e0b-a0effcbb297c","Type":"ContainerDied","Data":"55a2c515029a3f801627bf68b4a5ae495d947557136e119a53a10b4485a4eb3d"} Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.616288 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a2c515029a3f801627bf68b4a5ae495d947557136e119a53a10b4485a4eb3d" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.616324 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dvnrz" Feb 03 06:18:00 crc kubenswrapper[4872]: E0203 06:18:00.617247 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="53916dd7-8beb-48bb-8689-5693b2b3cf6f" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.619366 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nddft" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.619364 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nddft" event={"ID":"6abead70-db1e-4e93-9e76-214427aa519a","Type":"ContainerDied","Data":"138f6a84b28325849d5f0f2d410c59ba8fe350f472daf28ce9fc8ffee859a8b8"} Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.619510 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138f6a84b28325849d5f0f2d410c59ba8fe350f472daf28ce9fc8ffee859a8b8" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.621615 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-daec-account-create-update-kb2fr" event={"ID":"897f2bae-39b3-4862-b0a9-d9652b593e98","Type":"ContainerDied","Data":"45d72dadcc1dce5d1378fd0f1702cc7fafd6bcf3c45d84316a7cd008b86ad71f"} Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.621650 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d72dadcc1dce5d1378fd0f1702cc7fafd6bcf3c45d84316a7cd008b86ad71f" Feb 03 06:18:00 crc kubenswrapper[4872]: I0203 06:18:00.621665 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-daec-account-create-update-kb2fr" Feb 03 06:18:01 crc kubenswrapper[4872]: I0203 06:18:01.271709 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:18:01 crc kubenswrapper[4872]: I0203 06:18:01.271996 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:18:01 crc kubenswrapper[4872]: E0203 06:18:01.645511 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="53916dd7-8beb-48bb-8689-5693b2b3cf6f" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.139324 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.222404 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.230763 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxqzj\" (UniqueName: \"kubernetes.io/projected/33dd1ede-3c66-4876-8ada-0ed81db2d705-kube-api-access-cxqzj\") pod \"33dd1ede-3c66-4876-8ada-0ed81db2d705\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.230911 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33dd1ede-3c66-4876-8ada-0ed81db2d705-operator-scripts\") pod \"33dd1ede-3c66-4876-8ada-0ed81db2d705\" (UID: \"33dd1ede-3c66-4876-8ada-0ed81db2d705\") " Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.232835 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33dd1ede-3c66-4876-8ada-0ed81db2d705-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33dd1ede-3c66-4876-8ada-0ed81db2d705" (UID: "33dd1ede-3c66-4876-8ada-0ed81db2d705"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.235000 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33dd1ede-3c66-4876-8ada-0ed81db2d705-kube-api-access-cxqzj" (OuterVolumeSpecName: "kube-api-access-cxqzj") pod "33dd1ede-3c66-4876-8ada-0ed81db2d705" (UID: "33dd1ede-3c66-4876-8ada-0ed81db2d705"). InnerVolumeSpecName "kube-api-access-cxqzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.332724 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lrbn\" (UniqueName: \"kubernetes.io/projected/b0a2c89e-c85a-407d-a759-ac0851d0636f-kube-api-access-7lrbn\") pod \"b0a2c89e-c85a-407d-a759-ac0851d0636f\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.332789 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a2c89e-c85a-407d-a759-ac0851d0636f-operator-scripts\") pod \"b0a2c89e-c85a-407d-a759-ac0851d0636f\" (UID: \"b0a2c89e-c85a-407d-a759-ac0851d0636f\") " Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.333096 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33dd1ede-3c66-4876-8ada-0ed81db2d705-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.333106 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxqzj\" (UniqueName: \"kubernetes.io/projected/33dd1ede-3c66-4876-8ada-0ed81db2d705-kube-api-access-cxqzj\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.333366 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a2c89e-c85a-407d-a759-ac0851d0636f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0a2c89e-c85a-407d-a759-ac0851d0636f" (UID: "b0a2c89e-c85a-407d-a759-ac0851d0636f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.337776 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a2c89e-c85a-407d-a759-ac0851d0636f-kube-api-access-7lrbn" (OuterVolumeSpecName: "kube-api-access-7lrbn") pod "b0a2c89e-c85a-407d-a759-ac0851d0636f" (UID: "b0a2c89e-c85a-407d-a759-ac0851d0636f"). InnerVolumeSpecName "kube-api-access-7lrbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.434412 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lrbn\" (UniqueName: \"kubernetes.io/projected/b0a2c89e-c85a-407d-a759-ac0851d0636f-kube-api-access-7lrbn\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.434445 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0a2c89e-c85a-407d-a759-ac0851d0636f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.665294 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a758-account-create-update-cjr6t" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.665575 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a758-account-create-update-cjr6t" event={"ID":"33dd1ede-3c66-4876-8ada-0ed81db2d705","Type":"ContainerDied","Data":"c7633ff695ac984e59d36c2d9f50dcf1f6b302519167420673c0628e2e2e37fe"} Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.666589 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7633ff695ac984e59d36c2d9f50dcf1f6b302519167420673c0628e2e2e37fe" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.669619 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-53af-account-create-update-z7v7c" event={"ID":"b0a2c89e-c85a-407d-a759-ac0851d0636f","Type":"ContainerDied","Data":"bdac3f0df7420288e12d9021d5449d95f98bbc8144498f9055ad7273745ac6b6"} Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.669648 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-53af-account-create-update-z7v7c" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.669718 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdac3f0df7420288e12d9021d5449d95f98bbc8144498f9055ad7273745ac6b6" Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.672918 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-767c2" event={"ID":"02f4db43-1e9b-4fb5-8781-aeb301bc1298","Type":"ContainerStarted","Data":"e18fcfe1cfbc6f2f3cd2fdac470a460461c236fab47fe71b3826ab2aef0c995f"} Feb 03 06:18:04 crc kubenswrapper[4872]: I0203 06:18:04.697006 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-767c2" podStartSLOduration=6.563784358 podStartE2EDuration="13.696974995s" podCreationTimestamp="2026-02-03 06:17:51 +0000 UTC" firstStartedPulling="2026-02-03 06:17:57.000434684 +0000 UTC m=+1047.583126108" lastFinishedPulling="2026-02-03 06:18:04.133625331 +0000 UTC m=+1054.716316745" observedRunningTime="2026-02-03 06:18:04.695993002 +0000 UTC m=+1055.278684416" watchObservedRunningTime="2026-02-03 06:18:04.696974995 +0000 UTC m=+1055.279666459" Feb 03 06:18:08 crc kubenswrapper[4872]: I0203 06:18:08.727549 4872 generic.go:334] "Generic (PLEG): container finished" podID="02f4db43-1e9b-4fb5-8781-aeb301bc1298" containerID="e18fcfe1cfbc6f2f3cd2fdac470a460461c236fab47fe71b3826ab2aef0c995f" exitCode=0 Feb 03 06:18:08 crc kubenswrapper[4872]: I0203 06:18:08.727732 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-767c2" event={"ID":"02f4db43-1e9b-4fb5-8781-aeb301bc1298","Type":"ContainerDied","Data":"e18fcfe1cfbc6f2f3cd2fdac470a460461c236fab47fe71b3826ab2aef0c995f"} Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.112059 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-767c2" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.139922 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-combined-ca-bundle\") pod \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.140293 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-config-data\") pod \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.140457 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrs6v\" (UniqueName: \"kubernetes.io/projected/02f4db43-1e9b-4fb5-8781-aeb301bc1298-kube-api-access-mrs6v\") pod \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\" (UID: \"02f4db43-1e9b-4fb5-8781-aeb301bc1298\") " Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.175505 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f4db43-1e9b-4fb5-8781-aeb301bc1298-kube-api-access-mrs6v" (OuterVolumeSpecName: "kube-api-access-mrs6v") pod "02f4db43-1e9b-4fb5-8781-aeb301bc1298" (UID: "02f4db43-1e9b-4fb5-8781-aeb301bc1298"). InnerVolumeSpecName "kube-api-access-mrs6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.193887 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02f4db43-1e9b-4fb5-8781-aeb301bc1298" (UID: "02f4db43-1e9b-4fb5-8781-aeb301bc1298"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.216311 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-config-data" (OuterVolumeSpecName: "config-data") pod "02f4db43-1e9b-4fb5-8781-aeb301bc1298" (UID: "02f4db43-1e9b-4fb5-8781-aeb301bc1298"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.242969 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrs6v\" (UniqueName: \"kubernetes.io/projected/02f4db43-1e9b-4fb5-8781-aeb301bc1298-kube-api-access-mrs6v\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.242993 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.243004 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f4db43-1e9b-4fb5-8781-aeb301bc1298-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.755187 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-767c2" event={"ID":"02f4db43-1e9b-4fb5-8781-aeb301bc1298","Type":"ContainerDied","Data":"d206861603d61dee9c32229c475ab4e3dd9eef9aa567cbf1f1932858236704d9"} Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.755496 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d206861603d61dee9c32229c475ab4e3dd9eef9aa567cbf1f1932858236704d9" Feb 03 06:18:10 crc kubenswrapper[4872]: I0203 06:18:10.755561 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-767c2" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.094780 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-4xgqw"] Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095498 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897f2bae-39b3-4862-b0a9-d9652b593e98" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095514 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="897f2bae-39b3-4862-b0a9-d9652b593e98" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095526 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953773a5-8de7-4b75-9e0b-a0effcbb297c" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095533 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="953773a5-8de7-4b75-9e0b-a0effcbb297c" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095547 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6abead70-db1e-4e93-9e76-214427aa519a" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095554 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6abead70-db1e-4e93-9e76-214427aa519a" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095585 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33dd1ede-3c66-4876-8ada-0ed81db2d705" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095592 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="33dd1ede-3c66-4876-8ada-0ed81db2d705" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095604 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f4db43-1e9b-4fb5-8781-aeb301bc1298" containerName="keystone-db-sync" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095612 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f4db43-1e9b-4fb5-8781-aeb301bc1298" containerName="keystone-db-sync" Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095625 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095633 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095648 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d48c0f3-0353-4e0d-a5af-089233c0ab65" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095657 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d48c0f3-0353-4e0d-a5af-089233c0ab65" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: E0203 06:18:11.095673 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a2c89e-c85a-407d-a759-ac0851d0636f" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.095681 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a2c89e-c85a-407d-a759-ac0851d0636f" 
containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100026 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="897f2bae-39b3-4862-b0a9-d9652b593e98" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100059 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f4db43-1e9b-4fb5-8781-aeb301bc1298" containerName="keystone-db-sync" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100067 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a2c89e-c85a-407d-a759-ac0851d0636f" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100075 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="6abead70-db1e-4e93-9e76-214427aa519a" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100085 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="953773a5-8de7-4b75-9e0b-a0effcbb297c" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100100 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf" containerName="mariadb-database-create" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100111 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="33dd1ede-3c66-4876-8ada-0ed81db2d705" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.100121 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d48c0f3-0353-4e0d-a5af-089233c0ab65" containerName="mariadb-account-create-update" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.101009 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.112603 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-4xgqw"] Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.158697 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r8gf\" (UniqueName: \"kubernetes.io/projected/240760e5-c8de-4e7c-90c3-bf852301edea-kube-api-access-8r8gf\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.158754 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-config\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.158800 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.158827 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.158847 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.164055 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2ng6x"] Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.165107 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.188972 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.189160 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mb5hw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.189258 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.189359 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.189564 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.211041 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2ng6x"] Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.260830 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.260887 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.260914 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-combined-ca-bundle\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.260953 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-scripts\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.260984 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-credential-keys\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.261033 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-fernet-keys\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.261063 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8r8gf\" (UniqueName: \"kubernetes.io/projected/240760e5-c8de-4e7c-90c3-bf852301edea-kube-api-access-8r8gf\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.261102 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-config-data\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.261129 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-config\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.261200 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.261224 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-kube-api-access-nr25w\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.262238 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.263008 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.263805 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-config\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.268870 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.310381 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r8gf\" (UniqueName: 
\"kubernetes.io/projected/240760e5-c8de-4e7c-90c3-bf852301edea-kube-api-access-8r8gf\") pod \"dnsmasq-dns-5c9d85d47c-4xgqw\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.360173 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-m54jj"] Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.361127 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.362639 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-config-data\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.362720 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-kube-api-access-nr25w\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.362775 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-combined-ca-bundle\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.362795 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-scripts\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.362824 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-credential-keys\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.362857 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-fernet-keys\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.364987 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fd9l2" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.370055 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.372658 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.373619 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-config-data\") 
pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.375244 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-credential-keys\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.383811 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-m54jj"] Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.396193 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-scripts\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.397717 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-combined-ca-bundle\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.399966 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-fernet-keys\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.402363 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bf954ffc-8mhd9"] Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.415169 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.422385 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.422712 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.423427 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-kube-api-access-nr25w\") pod \"keystone-bootstrap-2ng6x\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.423535 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-zjswg" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.423663 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.435160 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf954ffc-8mhd9"] Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.453283 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475240 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc452\" (UniqueName: \"kubernetes.io/projected/9d670aec-b637-4fe6-b046-794d9628b49b-kube-api-access-pc452\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475285 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-config-data\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475317 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-db-sync-config-data\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475352 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-logs\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475487 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d670aec-b637-4fe6-b046-794d9628b49b-etc-machine-id\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475556 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-combined-ca-bundle\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475617 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndmz9\" (UniqueName: \"kubernetes.io/projected/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-kube-api-access-ndmz9\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475743 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-scripts\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475785 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-config-data\") pod \"horizon-bf954ffc-8mhd9\" (UID: 
\"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475820 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-scripts\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.475842 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-horizon-secret-key\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.517294 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.579856 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-scripts\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.579909 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-config-data\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.579940 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-scripts\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.579965 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-horizon-secret-key\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.580009 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc452\" (UniqueName: \"kubernetes.io/projected/9d670aec-b637-4fe6-b046-794d9628b49b-kube-api-access-pc452\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.580037 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-config-data\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.580055 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-db-sync-config-data\") pod \"cinder-db-sync-m54jj\" 
(UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.580090 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-logs\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.580126 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d670aec-b637-4fe6-b046-794d9628b49b-etc-machine-id\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.580163 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-combined-ca-bundle\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.580203 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndmz9\" (UniqueName: \"kubernetes.io/projected/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-kube-api-access-ndmz9\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.582063 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-scripts\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.582125 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d670aec-b637-4fe6-b046-794d9628b49b-etc-machine-id\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.582236 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-logs\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.583668 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-config-data\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.594903 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-scripts\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.598546 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-combined-ca-bundle\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.625427 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-horizon-secret-key\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:11 crc kubenswrapper[4872]: I0203 06:18:11.631554 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-db-sync-config-data\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.634026 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-config-data\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.723236 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndmz9\" (UniqueName: \"kubernetes.io/projected/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-kube-api-access-ndmz9\") pod \"horizon-bf954ffc-8mhd9\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.723660 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc452\" (UniqueName: \"kubernetes.io/projected/9d670aec-b637-4fe6-b046-794d9628b49b-kube-api-access-pc452\") pod \"cinder-db-sync-m54jj\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.743569 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-95dn6"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.744626 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.762834 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.763059 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.763187 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xcn7f" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.777901 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-95dn6"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.900730 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dbkgp"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.901674 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.903884 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-config-data\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.903918 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e3e69a-6af1-439d-a9e4-2295a6206492-logs\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.903958 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47wx\" (UniqueName: \"kubernetes.io/projected/13e3e69a-6af1-439d-a9e4-2295a6206492-kube-api-access-q47wx\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.903979 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-combined-ca-bundle\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.903997 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-scripts\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.904014 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-config\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.904045 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qc2c\" (UniqueName: \"kubernetes.io/projected/9ca50ee1-d592-41c3-869f-480e7d3d02f8-kube-api-access-5qc2c\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.904062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-combined-ca-bundle\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.905616 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-4xgqw"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.923286 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 06:18:13 
crc kubenswrapper[4872]: I0203 06:18:11.923492 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.923590 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mtt5j" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.924207 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m54jj" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.932030 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.934149 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.938990 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dbkgp"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.962125 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.963102 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:11.963335 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020391 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-config-data\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020423 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e3e69a-6af1-439d-a9e4-2295a6206492-logs\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020465 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47wx\" (UniqueName: \"kubernetes.io/projected/13e3e69a-6af1-439d-a9e4-2295a6206492-kube-api-access-q47wx\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020489 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-combined-ca-bundle\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020507 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-scripts\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020525 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-config\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020568 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qc2c\" (UniqueName: \"kubernetes.io/projected/9ca50ee1-d592-41c3-869f-480e7d3d02f8-kube-api-access-5qc2c\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.020586 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-combined-ca-bundle\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.042700 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-combined-ca-bundle\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.043116 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-scripts\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.043976 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e3e69a-6af1-439d-a9e4-2295a6206492-logs\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.046583 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-config\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.051535 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-combined-ca-bundle\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.057318 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.058225 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-config-data\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.091416 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-sqnd6"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.092620 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.120673 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47wx\" (UniqueName: \"kubernetes.io/projected/13e3e69a-6af1-439d-a9e4-2295a6206492-kube-api-access-q47wx\") pod \"placement-db-sync-dbkgp\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.121881 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwwtl\" (UniqueName: \"kubernetes.io/projected/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-kube-api-access-dwwtl\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.121931 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.121956 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-run-httpd\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.122034 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-log-httpd\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.122056 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-config-data\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.122074 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.122097 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-scripts\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.137438 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qc2c\" (UniqueName: \"kubernetes.io/projected/9ca50ee1-d592-41c3-869f-480e7d3d02f8-kube-api-access-5qc2c\") pod \"neutron-db-sync-95dn6\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.186410 4872 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-sqnd6"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.203771 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69f5c58cfc-sgggw"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.205193 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223567 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-run-httpd\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223617 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-config\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223705 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-log-httpd\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223723 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223746 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223771 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-config-data\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223786 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223811 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-scripts\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223867 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwwtl\" (UniqueName: 
\"kubernetes.io/projected/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-kube-api-access-dwwtl\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223897 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpfxr\" (UniqueName: \"kubernetes.io/projected/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-kube-api-access-bpfxr\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223922 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.223945 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.224390 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-run-httpd\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.225433 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-log-httpd\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.280322 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dbkgp" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.326927 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpfxr\" (UniqueName: \"kubernetes.io/projected/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-kube-api-access-bpfxr\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.326988 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-scripts\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327007 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327026 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-config\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327067 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e832ac8-6556-46fd-88a6-b2ebc386cc14-horizon-secret-key\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327085 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcqv\" (UniqueName: \"kubernetes.io/projected/1e832ac8-6556-46fd-88a6-b2ebc386cc14-kube-api-access-bzcqv\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327111 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-config-data\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327131 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327152 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " 
pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.327196 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e832ac8-6556-46fd-88a6-b2ebc386cc14-logs\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.329126 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.329605 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-config\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.330152 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.330673 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.343125 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f5c58cfc-sgggw"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.386997 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpfxr\" (UniqueName: \"kubernetes.io/projected/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-kube-api-access-bpfxr\") pod \"dnsmasq-dns-6ffb94d8ff-sqnd6\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.407338 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.407429 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.410222 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-scripts\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: 
I0203 06:18:12.414716 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-config-data\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.417953 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwwtl\" (UniqueName: \"kubernetes.io/projected/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-kube-api-access-dwwtl\") pod \"ceilometer-0\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.419667 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-95dn6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.420125 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.422362 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.430610 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e832ac8-6556-46fd-88a6-b2ebc386cc14-horizon-secret-key\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.430635 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzcqv\" (UniqueName: \"kubernetes.io/projected/1e832ac8-6556-46fd-88a6-b2ebc386cc14-kube-api-access-bzcqv\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.430665 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-config-data\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.430727 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e832ac8-6556-46fd-88a6-b2ebc386cc14-logs\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.430792 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-scripts\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.431657 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-scripts\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.434438 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-config-data\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.435624 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e832ac8-6556-46fd-88a6-b2ebc386cc14-logs\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.438865 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e832ac8-6556-46fd-88a6-b2ebc386cc14-horizon-secret-key\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.456140 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cr92l"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.457976 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.464276 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vxf4x" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.465090 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.492385 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzcqv\" (UniqueName: \"kubernetes.io/projected/1e832ac8-6556-46fd-88a6-b2ebc386cc14-kube-api-access-bzcqv\") pod \"horizon-69f5c58cfc-sgggw\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.517754 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cr92l"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.638316 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zp8j\" (UniqueName: \"kubernetes.io/projected/48007ee1-953a-42c7-9279-2f348eb7bffb-kube-api-access-9zp8j\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.638393 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-combined-ca-bundle\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.638419 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-db-sync-config-data\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.735063 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.740738 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zp8j\" (UniqueName: \"kubernetes.io/projected/48007ee1-953a-42c7-9279-2f348eb7bffb-kube-api-access-9zp8j\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.740811 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-combined-ca-bundle\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.740838 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-db-sync-config-data\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.749584 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-combined-ca-bundle\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.749714 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-db-sync-config-data\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.773215 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zp8j\" (UniqueName: \"kubernetes.io/projected/48007ee1-953a-42c7-9279-2f348eb7bffb-kube-api-access-9zp8j\") pod \"barbican-db-sync-cr92l\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.820426 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6j26n" event={"ID":"5a290b0d-8a5a-426e-9561-9372ba41afb5","Type":"ContainerStarted","Data":"f8acfac2bb75cc05862d7541df57c2245ba35a80a74808ba4a40f6f6c0391ba6"} Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:12.824806 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cr92l" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.754739 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf954ffc-8mhd9"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.809561 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bdcd6ddbf-zq5br"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.814486 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.830485 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bdcd6ddbf-zq5br"] Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.884468 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6j26n" podStartSLOduration=4.997034532 podStartE2EDuration="39.884450116s" podCreationTimestamp="2026-02-03 06:17:34 +0000 UTC" firstStartedPulling="2026-02-03 06:17:35.736894879 +0000 UTC m=+1026.319586303" lastFinishedPulling="2026-02-03 06:18:10.624310473 +0000 UTC m=+1061.207001887" observedRunningTime="2026-02-03 06:18:13.881201188 +0000 UTC m=+1064.463892602" watchObservedRunningTime="2026-02-03 06:18:13.884450116 +0000 UTC m=+1064.467141530" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.963319 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrg5\" (UniqueName: \"kubernetes.io/projected/585fab82-3ee7-4833-9e6a-63d58e40867b-kube-api-access-7xrg5\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.963785 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-config-data\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.963814 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/585fab82-3ee7-4833-9e6a-63d58e40867b-horizon-secret-key\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.963873 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-scripts\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:13 crc kubenswrapper[4872]: I0203 06:18:13.963916 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585fab82-3ee7-4833-9e6a-63d58e40867b-logs\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.065175 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-config-data\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.065222 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/585fab82-3ee7-4833-9e6a-63d58e40867b-horizon-secret-key\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " 
pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.065269 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-scripts\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.065298 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585fab82-3ee7-4833-9e6a-63d58e40867b-logs\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.065388 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrg5\" (UniqueName: \"kubernetes.io/projected/585fab82-3ee7-4833-9e6a-63d58e40867b-kube-api-access-7xrg5\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.066400 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-config-data\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.066568 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585fab82-3ee7-4833-9e6a-63d58e40867b-logs\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.066636 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-scripts\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.110170 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/585fab82-3ee7-4833-9e6a-63d58e40867b-horizon-secret-key\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.123256 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrg5\" (UniqueName: \"kubernetes.io/projected/585fab82-3ee7-4833-9e6a-63d58e40867b-kube-api-access-7xrg5\") pod \"horizon-5bdcd6ddbf-zq5br\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.144918 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.206533 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f5c58cfc-sgggw"] Feb 03 06:18:14 crc kubenswrapper[4872]: W0203 06:18:14.207336 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e832ac8_6556_46fd_88a6_b2ebc386cc14.slice/crio-cf291d867b89bd9d88bfdf226a7812a96324104feaab0239c9e9292e534558a5 WatchSource:0}: Error finding container cf291d867b89bd9d88bfdf226a7812a96324104feaab0239c9e9292e534558a5: Status 404 returned error can't find the container with id cf291d867b89bd9d88bfdf226a7812a96324104feaab0239c9e9292e534558a5 Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.257174 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.345073 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dbkgp"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.379989 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf954ffc-8mhd9"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.450844 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-m54jj"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.468825 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-95dn6"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.481257 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2ng6x"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.493167 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-sqnd6"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.499778 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-4xgqw"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.513743 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:18:14 crc kubenswrapper[4872]: W0203 06:18:14.516751 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ca50ee1_d592_41c3_869f_480e7d3d02f8.slice/crio-e28a74e67c9efcb5fe027ec1062febae51d6015d3a96e19dcc2e250bb3e28067 WatchSource:0}: Error finding container e28a74e67c9efcb5fe027ec1062febae51d6015d3a96e19dcc2e250bb3e28067: Status 404 returned error can't find the container with id e28a74e67c9efcb5fe027ec1062febae51d6015d3a96e19dcc2e250bb3e28067 Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.522488 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cr92l"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.836556 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m54jj" event={"ID":"9d670aec-b637-4fe6-b046-794d9628b49b","Type":"ContainerStarted","Data":"fb31fd8f3767e9facceaf103b2500726ac80378d7ad850357fe51a0a22f0af9a"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.840189 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-95dn6" event={"ID":"9ca50ee1-d592-41c3-869f-480e7d3d02f8","Type":"ContainerStarted","Data":"e28a74e67c9efcb5fe027ec1062febae51d6015d3a96e19dcc2e250bb3e28067"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 
06:18:14.848007 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" event={"ID":"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad","Type":"ContainerStarted","Data":"baabba7fe4750c0dc1975c09e450ae563772c70f0f4f6bde9da16606e7b35aba"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.855669 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5c58cfc-sgggw" event={"ID":"1e832ac8-6556-46fd-88a6-b2ebc386cc14","Type":"ContainerStarted","Data":"cf291d867b89bd9d88bfdf226a7812a96324104feaab0239c9e9292e534558a5"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.861018 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" event={"ID":"240760e5-c8de-4e7c-90c3-bf852301edea","Type":"ContainerStarted","Data":"ebcc49101a112f000cd14032f5ac1f2baa8adb13d9bebfc0ae5cc7ed6fa4866d"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.869945 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bdcd6ddbf-zq5br"] Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.870540 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2ng6x" event={"ID":"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02","Type":"ContainerStarted","Data":"de03583c32aad48dfc37ee383397d84749e210b8cf304401ee0e6aab7ad546dc"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.874150 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbkgp" event={"ID":"13e3e69a-6af1-439d-a9e4-2295a6206492","Type":"ContainerStarted","Data":"7047e739ad5fc01d01f7d4e36bfa19297ac1088d7cadd29a219243d558c9a3e0"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.875808 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2accf2e-9270-4b7b-ac0e-7062b53c7cda","Type":"ContainerStarted","Data":"5dee148751e28f6a51c0ceae9c3977434248817bd72762e464bd2f9073f521eb"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.877757 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf954ffc-8mhd9" event={"ID":"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00","Type":"ContainerStarted","Data":"528cdcba443b5dd612f21d1edc6d3023e16e3aa76fbf5750e7dd648e0aeef80e"} Feb 03 06:18:14 crc kubenswrapper[4872]: I0203 06:18:14.878544 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cr92l" event={"ID":"48007ee1-953a-42c7-9279-2f348eb7bffb","Type":"ContainerStarted","Data":"db0fe47f259262a173880cb54c848d3c93157a7667a78114b5ef7c34b52dcd61"} Feb 03 06:18:15 crc kubenswrapper[4872]: I0203 06:18:15.916279 4872 generic.go:334] "Generic (PLEG): container finished" podID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerID="ecc745e83931d0ed5fcba036ae7769fd7c18cede59e86365452cf3702ec3bb2e" exitCode=0 Feb 03 06:18:15 crc kubenswrapper[4872]: I0203 06:18:15.916487 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" event={"ID":"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad","Type":"ContainerDied","Data":"ecc745e83931d0ed5fcba036ae7769fd7c18cede59e86365452cf3702ec3bb2e"} Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.022805 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"9feb9c6e8b1b3f6a77c9b35e9716090cefe420fdaa86e22dd32efdfb0f2e415c"} Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.023219 4872 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"8d3e407eee6a5ce5856e0b487d99bd21d60023041c1b3da5c7ec77b67931418b"} Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.048279 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2ng6x" event={"ID":"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02","Type":"ContainerStarted","Data":"c2f91145d7f0449d154facf6a75e2f36d42a0e100aa9fd88d8f42643eae1e00f"} Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.062356 4872 generic.go:334] "Generic (PLEG): container finished" podID="240760e5-c8de-4e7c-90c3-bf852301edea" containerID="912503fa9942fa73ab931eb4df97c3703c46621a61e5cc4bf5b6dd126e5e05b1" exitCode=0 Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.062419 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" event={"ID":"240760e5-c8de-4e7c-90c3-bf852301edea","Type":"ContainerDied","Data":"912503fa9942fa73ab931eb4df97c3703c46621a61e5cc4bf5b6dd126e5e05b1"} Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.078336 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdcd6ddbf-zq5br" event={"ID":"585fab82-3ee7-4833-9e6a-63d58e40867b","Type":"ContainerStarted","Data":"7ad08ce3954e6f77e6482b13a5c57f3d0aa2d25132cac9fcb230094ede4a6dd0"} Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.089469 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-95dn6" event={"ID":"9ca50ee1-d592-41c3-869f-480e7d3d02f8","Type":"ContainerStarted","Data":"c83d2ff3724bd518c1a1b53e81f3ee27c447946dfae3855a6c7384764017fa72"} Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.091562 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2ng6x" podStartSLOduration=5.091543087 podStartE2EDuration="5.091543087s" podCreationTimestamp="2026-02-03 06:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:16.07467036 +0000 UTC m=+1066.657361774" watchObservedRunningTime="2026-02-03 06:18:16.091543087 +0000 UTC m=+1066.674234501" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.715136 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.747432 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-95dn6" podStartSLOduration=5.7474136510000005 podStartE2EDuration="5.747413651s" podCreationTimestamp="2026-02-03 06:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:16.170560472 +0000 UTC m=+1066.753251886" watchObservedRunningTime="2026-02-03 06:18:16.747413651 +0000 UTC m=+1067.330105065" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.881757 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-config\") pod \"240760e5-c8de-4e7c-90c3-bf852301edea\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.881849 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-nb\") pod \"240760e5-c8de-4e7c-90c3-bf852301edea\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.881918 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r8gf\" (UniqueName: \"kubernetes.io/projected/240760e5-c8de-4e7c-90c3-bf852301edea-kube-api-access-8r8gf\") pod \"240760e5-c8de-4e7c-90c3-bf852301edea\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.881999 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-dns-svc\") pod \"240760e5-c8de-4e7c-90c3-bf852301edea\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.882048 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-sb\") pod \"240760e5-c8de-4e7c-90c3-bf852301edea\" (UID: \"240760e5-c8de-4e7c-90c3-bf852301edea\") " Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.915994 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240760e5-c8de-4e7c-90c3-bf852301edea-kube-api-access-8r8gf" (OuterVolumeSpecName: "kube-api-access-8r8gf") pod "240760e5-c8de-4e7c-90c3-bf852301edea" (UID: "240760e5-c8de-4e7c-90c3-bf852301edea"). InnerVolumeSpecName "kube-api-access-8r8gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.930550 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "240760e5-c8de-4e7c-90c3-bf852301edea" (UID: "240760e5-c8de-4e7c-90c3-bf852301edea"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.932326 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "240760e5-c8de-4e7c-90c3-bf852301edea" (UID: "240760e5-c8de-4e7c-90c3-bf852301edea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.946123 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "240760e5-c8de-4e7c-90c3-bf852301edea" (UID: "240760e5-c8de-4e7c-90c3-bf852301edea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.949895 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-config" (OuterVolumeSpecName: "config") pod "240760e5-c8de-4e7c-90c3-bf852301edea" (UID: "240760e5-c8de-4e7c-90c3-bf852301edea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.983798 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.983827 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.983836 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.983845 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r8gf\" (UniqueName: \"kubernetes.io/projected/240760e5-c8de-4e7c-90c3-bf852301edea-kube-api-access-8r8gf\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:16 crc kubenswrapper[4872]: I0203 06:18:16.983854 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/240760e5-c8de-4e7c-90c3-bf852301edea-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.143962 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"6d54be79d580d531adc1529bf29da07f141640a203ae3927ba00c5c5d640bfb0"} Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.144025 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53916dd7-8beb-48bb-8689-5693b2b3cf6f","Type":"ContainerStarted","Data":"7023bb337e94a9d3fc34822c56ea7495f656be5d7d093ae8e7d896a1441a5910"} Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.150994 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" 
event={"ID":"240760e5-c8de-4e7c-90c3-bf852301edea","Type":"ContainerDied","Data":"ebcc49101a112f000cd14032f5ac1f2baa8adb13d9bebfc0ae5cc7ed6fa4866d"} Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.151063 4872 scope.go:117] "RemoveContainer" containerID="912503fa9942fa73ab931eb4df97c3703c46621a61e5cc4bf5b6dd126e5e05b1" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.151212 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-4xgqw" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.160344 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" event={"ID":"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad","Type":"ContainerStarted","Data":"c080d477e586c7fb831c1f251d390595e19f824c66c2ebe3dc774e62756a4387"} Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.160379 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.193677 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.38485621 podStartE2EDuration="1m11.193659042s" podCreationTimestamp="2026-02-03 06:17:06 +0000 UTC" firstStartedPulling="2026-02-03 06:17:40.077793039 +0000 UTC m=+1030.660484453" lastFinishedPulling="2026-02-03 06:18:14.886595871 +0000 UTC m=+1065.469287285" observedRunningTime="2026-02-03 06:18:17.189343418 +0000 UTC m=+1067.772034822" watchObservedRunningTime="2026-02-03 06:18:17.193659042 +0000 UTC m=+1067.776350456" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.268104 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" podStartSLOduration=6.268085557 podStartE2EDuration="6.268085557s" podCreationTimestamp="2026-02-03 06:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:17.228425661 +0000 UTC m=+1067.811117095" watchObservedRunningTime="2026-02-03 06:18:17.268085557 +0000 UTC m=+1067.850776961" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.309870 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-4xgqw"] Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.320333 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-4xgqw"] Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.602086 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-sqnd6"] Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.677649 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rcls4"] Feb 03 06:18:17 crc kubenswrapper[4872]: E0203 06:18:17.677975 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240760e5-c8de-4e7c-90c3-bf852301edea" containerName="init" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.677990 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="240760e5-c8de-4e7c-90c3-bf852301edea" containerName="init" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.678166 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="240760e5-c8de-4e7c-90c3-bf852301edea" containerName="init" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.678900 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.682302 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.714728 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rcls4"] Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.810326 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glx2g\" (UniqueName: \"kubernetes.io/projected/d2597923-b44f-4e69-b47f-e842c036f91f-kube-api-access-glx2g\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.810370 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.810395 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.810432 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.810452 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.810716 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-config\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.918544 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glx2g\" (UniqueName: \"kubernetes.io/projected/d2597923-b44f-4e69-b47f-e842c036f91f-kube-api-access-glx2g\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.918621 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: 
\"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.918650 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.918708 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.918732 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.918807 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-config\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.919845 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-config\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.920511 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.921043 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.921649 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.922629 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-svc\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:17 crc kubenswrapper[4872]: I0203 06:18:17.961423 
4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glx2g\" (UniqueName: \"kubernetes.io/projected/d2597923-b44f-4e69-b47f-e842c036f91f-kube-api-access-glx2g\") pod \"dnsmasq-dns-cf78879c9-rcls4\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:18 crc kubenswrapper[4872]: I0203 06:18:18.001232 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:18 crc kubenswrapper[4872]: I0203 06:18:18.142454 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240760e5-c8de-4e7c-90c3-bf852301edea" path="/var/lib/kubelet/pods/240760e5-c8de-4e7c-90c3-bf852301edea/volumes" Feb 03 06:18:18 crc kubenswrapper[4872]: I0203 06:18:18.664299 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rcls4"] Feb 03 06:18:18 crc kubenswrapper[4872]: W0203 06:18:18.706930 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2597923_b44f_4e69_b47f_e842c036f91f.slice/crio-28f29f41008ae07a46a024a0cbd7e36ccfb6b974ab335a3a617e0830df5a9225 WatchSource:0}: Error finding container 28f29f41008ae07a46a024a0cbd7e36ccfb6b974ab335a3a617e0830df5a9225: Status 404 returned error can't find the container with id 28f29f41008ae07a46a024a0cbd7e36ccfb6b974ab335a3a617e0830df5a9225 Feb 03 06:18:19 crc kubenswrapper[4872]: I0203 06:18:19.188214 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" event={"ID":"d2597923-b44f-4e69-b47f-e842c036f91f","Type":"ContainerStarted","Data":"28f29f41008ae07a46a024a0cbd7e36ccfb6b974ab335a3a617e0830df5a9225"} Feb 03 06:18:19 crc kubenswrapper[4872]: I0203 06:18:19.188345 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="dnsmasq-dns" containerID="cri-o://c080d477e586c7fb831c1f251d390595e19f824c66c2ebe3dc774e62756a4387" gracePeriod=10 Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.233800 4872 generic.go:334] "Generic (PLEG): container finished" podID="d2597923-b44f-4e69-b47f-e842c036f91f" containerID="d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a" exitCode=0 Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.234863 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" event={"ID":"d2597923-b44f-4e69-b47f-e842c036f91f","Type":"ContainerDied","Data":"d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a"} Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.258827 4872 generic.go:334] "Generic (PLEG): container finished" podID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerID="c080d477e586c7fb831c1f251d390595e19f824c66c2ebe3dc774e62756a4387" exitCode=0 Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.258869 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" event={"ID":"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad","Type":"ContainerDied","Data":"c080d477e586c7fb831c1f251d390595e19f824c66c2ebe3dc774e62756a4387"} Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.363704 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69f5c58cfc-sgggw"] Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.383918 4872 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-6b48d58c48-rvvcb"] Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.386860 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.392723 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.397820 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b48d58c48-rvvcb"] Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.476474 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-scripts\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.476539 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea66403d-a189-495c-9067-18571e929874-logs\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.476556 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-secret-key\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.476579 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-config-data\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.476600 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-tls-certs\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.476624 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsjt\" (UniqueName: \"kubernetes.io/projected/ea66403d-a189-495c-9067-18571e929874-kube-api-access-dgsjt\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.476679 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-combined-ca-bundle\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.487072 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bdcd6ddbf-zq5br"] Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.525180 
4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57dc94599b-bvf7j"] Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.544446 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.582296 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-combined-ca-bundle\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.582425 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-scripts\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.584851 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea66403d-a189-495c-9067-18571e929874-logs\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.584890 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-secret-key\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.584925 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-config-data\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.584954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-tls-certs\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.584992 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsjt\" (UniqueName: \"kubernetes.io/projected/ea66403d-a189-495c-9067-18571e929874-kube-api-access-dgsjt\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.593136 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea66403d-a189-495c-9067-18571e929874-logs\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.593416 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-config-data\") pod 
\"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.593696 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-scripts\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.594359 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-combined-ca-bundle\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.596932 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-secret-key\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.603113 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-tls-certs\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.617430 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsjt\" (UniqueName: \"kubernetes.io/projected/ea66403d-a189-495c-9067-18571e929874-kube-api-access-dgsjt\") pod \"horizon-6b48d58c48-rvvcb\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.642781 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57dc94599b-bvf7j"] Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.685992 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2jxn\" (UniqueName: \"kubernetes.io/projected/f475ab66-31e6-46da-ad2e-8e8279e33b68-kube-api-access-r2jxn\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.686033 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-combined-ca-bundle\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.686064 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f475ab66-31e6-46da-ad2e-8e8279e33b68-scripts\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.686087 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f475ab66-31e6-46da-ad2e-8e8279e33b68-logs\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.686119 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-horizon-secret-key\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.686145 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f475ab66-31e6-46da-ad2e-8e8279e33b68-config-data\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.686163 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-horizon-tls-certs\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.720232 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.793480 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2jxn\" (UniqueName: \"kubernetes.io/projected/f475ab66-31e6-46da-ad2e-8e8279e33b68-kube-api-access-r2jxn\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.793528 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-combined-ca-bundle\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.793555 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f475ab66-31e6-46da-ad2e-8e8279e33b68-scripts\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.793578 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f475ab66-31e6-46da-ad2e-8e8279e33b68-logs\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.793606 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-horizon-secret-key\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.793635 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f475ab66-31e6-46da-ad2e-8e8279e33b68-config-data\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.793655 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-horizon-tls-certs\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.796262 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f475ab66-31e6-46da-ad2e-8e8279e33b68-scripts\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.797124 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f475ab66-31e6-46da-ad2e-8e8279e33b68-logs\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.797998 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f475ab66-31e6-46da-ad2e-8e8279e33b68-config-data\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.812491 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-horizon-tls-certs\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.821114 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-combined-ca-bundle\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.826130 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f475ab66-31e6-46da-ad2e-8e8279e33b68-horizon-secret-key\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.833626 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2jxn\" (UniqueName: \"kubernetes.io/projected/f475ab66-31e6-46da-ad2e-8e8279e33b68-kube-api-access-r2jxn\") pod \"horizon-57dc94599b-bvf7j\" (UID: \"f475ab66-31e6-46da-ad2e-8e8279e33b68\") " pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:20 crc kubenswrapper[4872]: I0203 06:18:20.887081 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:26 crc kubenswrapper[4872]: I0203 06:18:26.332337 4872 generic.go:334] "Generic (PLEG): container finished" podID="3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" containerID="c2f91145d7f0449d154facf6a75e2f36d42a0e100aa9fd88d8f42643eae1e00f" exitCode=0 Feb 03 06:18:26 crc kubenswrapper[4872]: I0203 06:18:26.332853 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2ng6x" event={"ID":"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02","Type":"ContainerDied","Data":"c2f91145d7f0449d154facf6a75e2f36d42a0e100aa9fd88d8f42643eae1e00f"} Feb 03 06:18:27 crc kubenswrapper[4872]: I0203 06:18:27.430831 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: i/o timeout" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.564283 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.573799 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.663673 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-fernet-keys\") pod \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.663746 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-config-data\") pod \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.663829 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpfxr\" (UniqueName: \"kubernetes.io/projected/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-kube-api-access-bpfxr\") pod \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.663908 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-scripts\") pod \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.663935 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-nb\") pod \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.663966 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-sb\") pod \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.664008 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-credential-keys\") pod \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.664520 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-config\") pod \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.664569 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-kube-api-access-nr25w\") pod \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.664651 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-dns-svc\") pod \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\" (UID: \"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.664787 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-combined-ca-bundle\") pod \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\" (UID: \"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02\") " Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.669347 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" (UID: "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.669618 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" (UID: "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.671718 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-kube-api-access-bpfxr" (OuterVolumeSpecName: "kube-api-access-bpfxr") pod "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" (UID: "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad"). InnerVolumeSpecName "kube-api-access-bpfxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.672006 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-scripts" (OuterVolumeSpecName: "scripts") pod "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" (UID: "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.674342 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-kube-api-access-nr25w" (OuterVolumeSpecName: "kube-api-access-nr25w") pod "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" (UID: "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02"). InnerVolumeSpecName "kube-api-access-nr25w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.710925 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" (UID: "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.722322 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" (UID: "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.734925 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" (UID: "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.736869 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-config-data" (OuterVolumeSpecName: "config-data") pod "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" (UID: "3dfacfe1-1de1-4013-9e78-e5bbd9f33d02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.750300 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-config" (OuterVolumeSpecName: "config") pod "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" (UID: "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.759764 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" (UID: "d496bc0b-cbec-4cf6-8c9e-905619ebb8ad"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769372 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769405 4872 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769416 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769426 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr25w\" (UniqueName: \"kubernetes.io/projected/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-kube-api-access-nr25w\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769438 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769446 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769454 4872 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769461 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769471 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpfxr\" (UniqueName: \"kubernetes.io/projected/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-kube-api-access-bpfxr\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769478 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: I0203 06:18:29.769486 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:29 crc kubenswrapper[4872]: E0203 06:18:29.947635 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 03 06:18:29 crc kubenswrapper[4872]: E0203 06:18:29.947943 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n575h64h668h66chfdh566h5b7h649h644h586h658h577h688h5chf7h5c7h556h574h5b5h6h54bh5bh76hd7h59fh644h58h5b5hffhd9h658hbbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwwtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2accf2e-9270-4b7b-ac0e-7062b53c7cda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.367242 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2ng6x" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.367237 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2ng6x" event={"ID":"3dfacfe1-1de1-4013-9e78-e5bbd9f33d02","Type":"ContainerDied","Data":"de03583c32aad48dfc37ee383397d84749e210b8cf304401ee0e6aab7ad546dc"} Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.367369 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de03583c32aad48dfc37ee383397d84749e210b8cf304401ee0e6aab7ad546dc" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.369540 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" event={"ID":"d496bc0b-cbec-4cf6-8c9e-905619ebb8ad","Type":"ContainerDied","Data":"baabba7fe4750c0dc1975c09e450ae563772c70f0f4f6bde9da16606e7b35aba"} Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.369575 4872 scope.go:117] "RemoveContainer" containerID="c080d477e586c7fb831c1f251d390595e19f824c66c2ebe3dc774e62756a4387" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.369612 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.398883 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-sqnd6"] Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.405750 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-sqnd6"] Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.776452 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2ng6x"] Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.783626 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2ng6x"] Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.869054 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-98455"] Feb 03 06:18:30 crc kubenswrapper[4872]: E0203 06:18:30.869802 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" containerName="keystone-bootstrap" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.869824 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" containerName="keystone-bootstrap" Feb 03 06:18:30 crc kubenswrapper[4872]: E0203 06:18:30.869843 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="init" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.869852 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="init" Feb 03 06:18:30 crc kubenswrapper[4872]: E0203 06:18:30.869869 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="dnsmasq-dns" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.869877 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="dnsmasq-dns" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.870082 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" containerName="keystone-bootstrap" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.870096 4872 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="dnsmasq-dns" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.870779 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.874158 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.874182 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.874393 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.874555 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mb5hw" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.874660 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.878217 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-98455"] Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.989654 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-combined-ca-bundle\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.989793 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brlll\" (UniqueName: \"kubernetes.io/projected/574f0ecd-3f3a-448f-a958-9c606833ad00-kube-api-access-brlll\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.989820 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-fernet-keys\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.989865 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-scripts\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.989881 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-credential-keys\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:30 crc kubenswrapper[4872]: I0203 06:18:30.989907 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-config-data\") pod \"keystone-bootstrap-98455\" 
(UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.091961 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brlll\" (UniqueName: \"kubernetes.io/projected/574f0ecd-3f3a-448f-a958-9c606833ad00-kube-api-access-brlll\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.092270 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-fernet-keys\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.092408 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-scripts\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.092508 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-credential-keys\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.092609 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-config-data\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.092800 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-combined-ca-bundle\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.099118 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-scripts\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.105442 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-combined-ca-bundle\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.105523 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-fernet-keys\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.106031 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-config-data\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.112330 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-credential-keys\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.118413 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brlll\" (UniqueName: \"kubernetes.io/projected/574f0ecd-3f3a-448f-a958-9c606833ad00-kube-api-access-brlll\") pod \"keystone-bootstrap-98455\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.204842 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.271120 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:18:31 crc kubenswrapper[4872]: I0203 06:18:31.271182 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:18:32 crc kubenswrapper[4872]: I0203 06:18:32.143002 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfacfe1-1de1-4013-9e78-e5bbd9f33d02" path="/var/lib/kubelet/pods/3dfacfe1-1de1-4013-9e78-e5bbd9f33d02/volumes" Feb 03 06:18:32 crc kubenswrapper[4872]: I0203 06:18:32.143986 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" path="/var/lib/kubelet/pods/d496bc0b-cbec-4cf6-8c9e-905619ebb8ad/volumes" Feb 03 06:18:32 crc kubenswrapper[4872]: I0203 06:18:32.432250 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-sqnd6" podUID="d496bc0b-cbec-4cf6-8c9e-905619ebb8ad" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: i/o timeout" Feb 03 06:18:44 crc kubenswrapper[4872]: E0203 06:18:44.220228 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 03 06:18:44 crc kubenswrapper[4872]: E0203 06:18:44.220919 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q47wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-dbkgp_openstack(13e3e69a-6af1-439d-a9e4-2295a6206492): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:18:44 crc kubenswrapper[4872]: E0203 06:18:44.222443 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-dbkgp" podUID="13e3e69a-6af1-439d-a9e4-2295a6206492" Feb 03 06:18:44 crc kubenswrapper[4872]: E0203 06:18:44.514047 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-dbkgp" podUID="13e3e69a-6af1-439d-a9e4-2295a6206492" Feb 03 06:18:44 crc kubenswrapper[4872]: I0203 06:18:44.841530 4872 scope.go:117] "RemoveContainer" containerID="ecc745e83931d0ed5fcba036ae7769fd7c18cede59e86365452cf3702ec3bb2e" Feb 03 06:18:44 crc kubenswrapper[4872]: E0203 06:18:44.842734 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 03 06:18:44 crc kubenswrapper[4872]: E0203 06:18:44.842917 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cr92l_openstack(48007ee1-953a-42c7-9279-2f348eb7bffb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:18:44 crc kubenswrapper[4872]: E0203 06:18:44.844084 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cr92l" podUID="48007ee1-953a-42c7-9279-2f348eb7bffb" Feb 03 06:18:45 crc kubenswrapper[4872]: E0203 06:18:45.524144 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cr92l" podUID="48007ee1-953a-42c7-9279-2f348eb7bffb" Feb 03 06:18:46 crc kubenswrapper[4872]: E0203 06:18:46.091303 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 03 06:18:46 crc kubenswrapper[4872]: E0203 06:18:46.091625 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pc452,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-m54jj_openstack(9d670aec-b637-4fe6-b046-794d9628b49b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:18:46 crc kubenswrapper[4872]: E0203 06:18:46.092929 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-m54jj" podUID="9d670aec-b637-4fe6-b046-794d9628b49b" Feb 03 06:18:46 crc kubenswrapper[4872]: E0203 06:18:46.338591 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Feb 03 06:18:46 crc kubenswrapper[4872]: E0203 06:18:46.339123 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n575h64h668h66chfdh566h5b7h649h644h586h658h577h688h5chf7h5c7h556h574h5b5h6h54bh5bh76hd7h59fh644h58h5b5hffhd9h658hbbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwwtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f2accf2e-9270-4b7b-ac0e-7062b53c7cda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:18:46 crc kubenswrapper[4872]: I0203 06:18:46.573156 4872 generic.go:334] "Generic (PLEG): container finished" podID="5a290b0d-8a5a-426e-9561-9372ba41afb5" containerID="f8acfac2bb75cc05862d7541df57c2245ba35a80a74808ba4a40f6f6c0391ba6" exitCode=0 Feb 03 06:18:46 crc kubenswrapper[4872]: I0203 06:18:46.574773 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6j26n" event={"ID":"5a290b0d-8a5a-426e-9561-9372ba41afb5","Type":"ContainerDied","Data":"f8acfac2bb75cc05862d7541df57c2245ba35a80a74808ba4a40f6f6c0391ba6"} Feb 03 06:18:46 crc kubenswrapper[4872]: E0203 06:18:46.576359 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-m54jj" podUID="9d670aec-b637-4fe6-b046-794d9628b49b" Feb 03 06:18:46 crc kubenswrapper[4872]: I0203 06:18:46.903055 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57dc94599b-bvf7j"] Feb 03 06:18:46 crc kubenswrapper[4872]: W0203 06:18:46.908025 
4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf475ab66_31e6_46da_ad2e_8e8279e33b68.slice/crio-dec758d98eece496d7262f546f60dc7854c3bbc7dfb9401ca762aaf7db6ef1aa WatchSource:0}: Error finding container dec758d98eece496d7262f546f60dc7854c3bbc7dfb9401ca762aaf7db6ef1aa: Status 404 returned error can't find the container with id dec758d98eece496d7262f546f60dc7854c3bbc7dfb9401ca762aaf7db6ef1aa Feb 03 06:18:46 crc kubenswrapper[4872]: I0203 06:18:46.987364 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b48d58c48-rvvcb"] Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.029820 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-98455"] Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.590822 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf954ffc-8mhd9" event={"ID":"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00","Type":"ContainerStarted","Data":"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.591063 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf954ffc-8mhd9" event={"ID":"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00","Type":"ContainerStarted","Data":"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.591176 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bf954ffc-8mhd9" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon-log" containerID="cri-o://2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5" gracePeriod=30 Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.591542 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bf954ffc-8mhd9" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon" containerID="cri-o://2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18" gracePeriod=30 Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.594387 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57dc94599b-bvf7j" event={"ID":"f475ab66-31e6-46da-ad2e-8e8279e33b68","Type":"ContainerStarted","Data":"def8661c05bb4e9553cd79780f48ef13e8421f47cd09acd384a5bfeb8d63e1a4"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.594422 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57dc94599b-bvf7j" event={"ID":"f475ab66-31e6-46da-ad2e-8e8279e33b68","Type":"ContainerStarted","Data":"7438c98e4e72e1e73349ff60e5dba30155f6b8ba915dd96e2e0daa709e8c661d"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.594434 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57dc94599b-bvf7j" event={"ID":"f475ab66-31e6-46da-ad2e-8e8279e33b68","Type":"ContainerStarted","Data":"dec758d98eece496d7262f546f60dc7854c3bbc7dfb9401ca762aaf7db6ef1aa"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.606378 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerStarted","Data":"555391f6e6812ef4d7e698df432a3cb13becfd2fc7c2de23c36f6678fe7447f6"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.606414 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" 
event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerStarted","Data":"b866d9fc6c5d934b5d2595150c2a4394b57c67415e70621779928a6a8aeaf6db"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.609788 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bdcd6ddbf-zq5br" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon-log" containerID="cri-o://f3678e68db7c1dbcfdf96c5b4c7d8d4da1a11538f6f43d1bc7a37114cee8dc40" gracePeriod=30 Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.609882 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bdcd6ddbf-zq5br" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon" containerID="cri-o://c4064763511a1e35c5acf78b13964a62a35f00d469db86649c646c0c9ad31477" gracePeriod=30 Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.609857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdcd6ddbf-zq5br" event={"ID":"585fab82-3ee7-4833-9e6a-63d58e40867b","Type":"ContainerStarted","Data":"c4064763511a1e35c5acf78b13964a62a35f00d469db86649c646c0c9ad31477"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.609936 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdcd6ddbf-zq5br" event={"ID":"585fab82-3ee7-4833-9e6a-63d58e40867b","Type":"ContainerStarted","Data":"f3678e68db7c1dbcfdf96c5b4c7d8d4da1a11538f6f43d1bc7a37114cee8dc40"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.612756 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-98455" event={"ID":"574f0ecd-3f3a-448f-a958-9c606833ad00","Type":"ContainerStarted","Data":"5bf5bc48e219b91be187fb1e5a8a28f87f74da986ff2a591b8e5591bdcd0a68e"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.612799 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-98455" event={"ID":"574f0ecd-3f3a-448f-a958-9c606833ad00","Type":"ContainerStarted","Data":"4f4ae9a5f2a9d4db282090e91512b17d2aa7841f80f07b5712f11b05eae01ed0"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.625863 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bf954ffc-8mhd9" podStartSLOduration=4.643492842 podStartE2EDuration="36.625845497s" podCreationTimestamp="2026-02-03 06:18:11 +0000 UTC" firstStartedPulling="2026-02-03 06:18:14.408283997 +0000 UTC m=+1064.990975411" lastFinishedPulling="2026-02-03 06:18:46.390636652 +0000 UTC m=+1096.973328066" observedRunningTime="2026-02-03 06:18:47.615358394 +0000 UTC m=+1098.198049808" watchObservedRunningTime="2026-02-03 06:18:47.625845497 +0000 UTC m=+1098.208536901" Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.634624 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" event={"ID":"d2597923-b44f-4e69-b47f-e842c036f91f","Type":"ContainerStarted","Data":"9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.635059 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.648409 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57dc94599b-bvf7j" podStartSLOduration=27.648390001 podStartE2EDuration="27.648390001s" podCreationTimestamp="2026-02-03 06:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:47.642385106 +0000 UTC m=+1098.225076520" watchObservedRunningTime="2026-02-03 06:18:47.648390001 +0000 UTC m=+1098.231081415" Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.654961 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69f5c58cfc-sgggw" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon-log" containerID="cri-o://1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a" gracePeriod=30 Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.655222 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5c58cfc-sgggw" event={"ID":"1e832ac8-6556-46fd-88a6-b2ebc386cc14","Type":"ContainerStarted","Data":"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.655244 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5c58cfc-sgggw" event={"ID":"1e832ac8-6556-46fd-88a6-b2ebc386cc14","Type":"ContainerStarted","Data":"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a"} Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.655272 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69f5c58cfc-sgggw" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon" containerID="cri-o://2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8" gracePeriod=30 Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.675429 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-98455" podStartSLOduration=17.675406772 podStartE2EDuration="17.675406772s" podCreationTimestamp="2026-02-03 06:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:47.667522573 +0000 UTC m=+1098.250213987" watchObservedRunningTime="2026-02-03 06:18:47.675406772 +0000 UTC m=+1098.258098196" Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.725513 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69f5c58cfc-sgggw" podStartSLOduration=3.56043944 podStartE2EDuration="35.72549544s" podCreationTimestamp="2026-02-03 06:18:12 +0000 UTC" firstStartedPulling="2026-02-03 06:18:14.225995802 +0000 UTC m=+1064.808687216" lastFinishedPulling="2026-02-03 06:18:46.391051812 +0000 UTC m=+1096.973743216" observedRunningTime="2026-02-03 06:18:47.722972189 +0000 UTC m=+1098.305663603" watchObservedRunningTime="2026-02-03 06:18:47.72549544 +0000 UTC m=+1098.308186864" Feb 03 06:18:47 crc kubenswrapper[4872]: I0203 06:18:47.728311 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bdcd6ddbf-zq5br" podStartSLOduration=3.578060919 podStartE2EDuration="34.728301328s" podCreationTimestamp="2026-02-03 06:18:13 +0000 UTC" firstStartedPulling="2026-02-03 06:18:14.886606451 +0000 UTC m=+1065.469297865" lastFinishedPulling="2026-02-03 06:18:46.03684686 +0000 UTC m=+1096.619538274" observedRunningTime="2026-02-03 06:18:47.701168454 +0000 UTC m=+1098.283859868" watchObservedRunningTime="2026-02-03 06:18:47.728301328 +0000 UTC m=+1098.310992742" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.361543 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6j26n" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.409009 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" podStartSLOduration=31.408988621 podStartE2EDuration="31.408988621s" podCreationTimestamp="2026-02-03 06:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:47.751995949 +0000 UTC m=+1098.334687363" watchObservedRunningTime="2026-02-03 06:18:48.408988621 +0000 UTC m=+1098.991680025" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.463591 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk4h6\" (UniqueName: \"kubernetes.io/projected/5a290b0d-8a5a-426e-9561-9372ba41afb5-kube-api-access-pk4h6\") pod \"5a290b0d-8a5a-426e-9561-9372ba41afb5\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.463793 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-config-data\") pod \"5a290b0d-8a5a-426e-9561-9372ba41afb5\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.463896 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-db-sync-config-data\") pod \"5a290b0d-8a5a-426e-9561-9372ba41afb5\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.463974 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-combined-ca-bundle\") pod \"5a290b0d-8a5a-426e-9561-9372ba41afb5\" (UID: \"5a290b0d-8a5a-426e-9561-9372ba41afb5\") " Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.470039 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a290b0d-8a5a-426e-9561-9372ba41afb5-kube-api-access-pk4h6" (OuterVolumeSpecName: "kube-api-access-pk4h6") pod "5a290b0d-8a5a-426e-9561-9372ba41afb5" (UID: "5a290b0d-8a5a-426e-9561-9372ba41afb5"). InnerVolumeSpecName "kube-api-access-pk4h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.488898 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5a290b0d-8a5a-426e-9561-9372ba41afb5" (UID: "5a290b0d-8a5a-426e-9561-9372ba41afb5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.537830 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a290b0d-8a5a-426e-9561-9372ba41afb5" (UID: "5a290b0d-8a5a-426e-9561-9372ba41afb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.566367 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.566586 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk4h6\" (UniqueName: \"kubernetes.io/projected/5a290b0d-8a5a-426e-9561-9372ba41afb5-kube-api-access-pk4h6\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.566597 4872 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.577552 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-config-data" (OuterVolumeSpecName: "config-data") pod "5a290b0d-8a5a-426e-9561-9372ba41afb5" (UID: "5a290b0d-8a5a-426e-9561-9372ba41afb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.668055 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a290b0d-8a5a-426e-9561-9372ba41afb5-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.668987 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerStarted","Data":"534ee65dd0168b081759c8c62509c1a653cf9038605da25240365de5b1fdcb4c"} Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.674017 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6j26n" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.681594 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6j26n" event={"ID":"5a290b0d-8a5a-426e-9561-9372ba41afb5","Type":"ContainerDied","Data":"332c4df04ca1ad02bcf1d982e9c6327eaca3db106ebe7d8d888d9e65f2569d5b"} Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.681621 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332c4df04ca1ad02bcf1d982e9c6327eaca3db106ebe7d8d888d9e65f2569d5b" Feb 03 06:18:48 crc kubenswrapper[4872]: I0203 06:18:48.710124 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b48d58c48-rvvcb" podStartSLOduration=28.710101222 podStartE2EDuration="28.710101222s" podCreationTimestamp="2026-02-03 06:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:48.696014192 +0000 UTC m=+1099.278705606" watchObservedRunningTime="2026-02-03 06:18:48.710101222 +0000 UTC m=+1099.292792646" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.250570 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rcls4"] Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.317585 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-gjzcr"] Feb 03 06:18:49 crc kubenswrapper[4872]: E0203 06:18:49.318072 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a290b0d-8a5a-426e-9561-9372ba41afb5" containerName="glance-db-sync" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.318135 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a290b0d-8a5a-426e-9561-9372ba41afb5" containerName="glance-db-sync" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.318405 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a290b0d-8a5a-426e-9561-9372ba41afb5" containerName="glance-db-sync" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.319278 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.406147 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28gj7\" (UniqueName: \"kubernetes.io/projected/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-kube-api-access-28gj7\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.406409 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.406569 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.406631 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.406745 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-config\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.406794 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.508090 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28gj7\" (UniqueName: \"kubernetes.io/projected/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-kube-api-access-28gj7\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.508155 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.508187 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.508214 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.508253 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-config\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.508279 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.509118 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.509282 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.509677 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.510130 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.510231 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-config\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.513078 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-gjzcr"] Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.568504 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-28gj7\" (UniqueName: \"kubernetes.io/projected/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-kube-api-access-28gj7\") pod \"dnsmasq-dns-56df8fb6b7-gjzcr\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.638837 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:49 crc kubenswrapper[4872]: I0203 06:18:49.688554 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" podUID="d2597923-b44f-4e69-b47f-e842c036f91f" containerName="dnsmasq-dns" containerID="cri-o://9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e" gracePeriod=10 Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.350084 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-gjzcr"] Feb 03 06:18:50 crc kubenswrapper[4872]: W0203 06:18:50.355619 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3053af6_1e44_4aee_8ea5_0bcbcc6ee8ef.slice/crio-b4d5936f32ed0d48f1e91c54cf40dade35fd13b2fdaef02a893a0e347bd7a5be WatchSource:0}: Error finding container b4d5936f32ed0d48f1e91c54cf40dade35fd13b2fdaef02a893a0e347bd7a5be: Status 404 returned error can't find the container with id b4d5936f32ed0d48f1e91c54cf40dade35fd13b2fdaef02a893a0e347bd7a5be Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.406151 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.408154 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.416611 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.419051 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.419297 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xv67c" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.419419 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.536014 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-config-data\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.536068 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.536098 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-scripts\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.536131 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lhkt\" (UniqueName: \"kubernetes.io/projected/7d063e20-3fc3-4024-a51a-9e5a37c2d177-kube-api-access-8lhkt\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.536154 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.536208 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-logs\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.536281 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " 
pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.615216 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.616718 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.645712 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.647180 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lhkt\" (UniqueName: \"kubernetes.io/projected/7d063e20-3fc3-4024-a51a-9e5a37c2d177-kube-api-access-8lhkt\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.647215 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.647265 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-logs\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.647327 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.647356 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-config-data\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.647378 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.647404 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-scripts\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.648766 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-logs\") pod \"glance-default-external-api-0\" (UID: 
\"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.649078 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.649300 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.653503 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-scripts\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.657933 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.667779 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-config-data\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.683094 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.687522 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lhkt\" (UniqueName: \"kubernetes.io/projected/7d063e20-3fc3-4024-a51a-9e5a37c2d177-kube-api-access-8lhkt\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.711087 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.718036 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") " pod="openstack/glance-default-external-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.718408 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" event={"ID":"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef","Type":"ContainerStarted","Data":"b4d5936f32ed0d48f1e91c54cf40dade35fd13b2fdaef02a893a0e347bd7a5be"} Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.725830 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.725885 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.744046 4872 generic.go:334] "Generic (PLEG): container finished" podID="d2597923-b44f-4e69-b47f-e842c036f91f" containerID="9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e" exitCode=0 Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.744101 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" event={"ID":"d2597923-b44f-4e69-b47f-e842c036f91f","Type":"ContainerDied","Data":"9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e"} Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.744276 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" event={"ID":"d2597923-b44f-4e69-b47f-e842c036f91f","Type":"ContainerDied","Data":"28f29f41008ae07a46a024a0cbd7e36ccfb6b974ab335a3a617e0830df5a9225"} Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.744298 4872 scope.go:117] "RemoveContainer" containerID="9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.744170 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-rcls4" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748141 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glx2g\" (UniqueName: \"kubernetes.io/projected/d2597923-b44f-4e69-b47f-e842c036f91f-kube-api-access-glx2g\") pod \"d2597923-b44f-4e69-b47f-e842c036f91f\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748194 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-config\") pod \"d2597923-b44f-4e69-b47f-e842c036f91f\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748279 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-sb\") pod \"d2597923-b44f-4e69-b47f-e842c036f91f\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748348 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-svc\") pod \"d2597923-b44f-4e69-b47f-e842c036f91f\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748391 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-swift-storage-0\") pod \"d2597923-b44f-4e69-b47f-e842c036f91f\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748443 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-nb\") pod \"d2597923-b44f-4e69-b47f-e842c036f91f\" (UID: \"d2597923-b44f-4e69-b47f-e842c036f91f\") " Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748704 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748756 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vh5\" (UniqueName: \"kubernetes.io/projected/e3554454-7e8b-4f41-ba97-75595a3f10aa-kube-api-access-77vh5\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748808 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748856 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748898 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.748919 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.749661 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.770011 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2597923-b44f-4e69-b47f-e842c036f91f-kube-api-access-glx2g" (OuterVolumeSpecName: "kube-api-access-glx2g") pod "d2597923-b44f-4e69-b47f-e842c036f91f" (UID: "d2597923-b44f-4e69-b47f-e842c036f91f"). InnerVolumeSpecName "kube-api-access-glx2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.852835 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.852907 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.852931 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.852960 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.853038 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.853072 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vh5\" (UniqueName: \"kubernetes.io/projected/e3554454-7e8b-4f41-ba97-75595a3f10aa-kube-api-access-77vh5\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.853110 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.853176 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glx2g\" (UniqueName: \"kubernetes.io/projected/d2597923-b44f-4e69-b47f-e842c036f91f-kube-api-access-glx2g\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.854064 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.854251 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.854764 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-logs\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.864025 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2597923-b44f-4e69-b47f-e842c036f91f" (UID: "d2597923-b44f-4e69-b47f-e842c036f91f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.864403 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2597923-b44f-4e69-b47f-e842c036f91f" (UID: "d2597923-b44f-4e69-b47f-e842c036f91f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.867950 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.870808 4872 scope.go:117] "RemoveContainer" containerID="d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.884250 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.888883 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.889158 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.892192 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.892791 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vh5\" (UniqueName: \"kubernetes.io/projected/e3554454-7e8b-4f41-ba97-75595a3f10aa-kube-api-access-77vh5\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " 
pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.924454 4872 scope.go:117] "RemoveContainer" containerID="9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e" Feb 03 06:18:50 crc kubenswrapper[4872]: E0203 06:18:50.925470 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e\": container with ID starting with 9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e not found: ID does not exist" containerID="9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.925501 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e"} err="failed to get container status \"9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e\": rpc error: code = NotFound desc = could not find container \"9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e\": container with ID starting with 9bf874d76b875db3300bf65efccc2b43ac94d4d3bd738bb87d051fc7b6b8829e not found: ID does not exist" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.925519 4872 scope.go:117] "RemoveContainer" containerID="d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a" Feb 03 06:18:50 crc kubenswrapper[4872]: E0203 06:18:50.931548 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a\": container with ID starting with d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a not found: ID does not exist" containerID="d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.931571 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a"} err="failed to get container status \"d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a\": rpc error: code = NotFound desc = could not find container \"d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a\": container with ID starting with d50a3d4766778ed173196b56a13038124a898422b6f42f5488116d32e7dc0d6a not found: ID does not exist" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.936408 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2597923-b44f-4e69-b47f-e842c036f91f" (UID: "d2597923-b44f-4e69-b47f-e842c036f91f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.945933 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2597923-b44f-4e69-b47f-e842c036f91f" (UID: "d2597923-b44f-4e69-b47f-e842c036f91f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.952944 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.954357 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-config" (OuterVolumeSpecName: "config") pod "d2597923-b44f-4e69-b47f-e842c036f91f" (UID: "d2597923-b44f-4e69-b47f-e842c036f91f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.955608 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.955902 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.955922 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.955933 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.955942 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2597923-b44f-4e69-b47f-e842c036f91f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:50 crc kubenswrapper[4872]: I0203 06:18:50.987726 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.004149 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.126892 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rcls4"] Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.165173 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-rcls4"] Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.721957 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.772430 4872 generic.go:334] "Generic (PLEG): container finished" podID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerID="3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86" exitCode=0 Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.772485 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" event={"ID":"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef","Type":"ContainerDied","Data":"3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86"} Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.886797 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:18:51 crc kubenswrapper[4872]: I0203 06:18:51.962889 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.182885 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2597923-b44f-4e69-b47f-e842c036f91f" path="/var/lib/kubelet/pods/d2597923-b44f-4e69-b47f-e842c036f91f/volumes" Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.736648 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.808636 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e3554454-7e8b-4f41-ba97-75595a3f10aa","Type":"ContainerStarted","Data":"4aa929bd2093672fb20e1fa5f6623ba44913f9362b46850d864270fd371422b7"} Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.811055 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" event={"ID":"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef","Type":"ContainerStarted","Data":"2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328"} Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.811562 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.814277 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7d063e20-3fc3-4024-a51a-9e5a37c2d177","Type":"ContainerStarted","Data":"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8"} Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.814316 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7d063e20-3fc3-4024-a51a-9e5a37c2d177","Type":"ContainerStarted","Data":"c7860474ce2654900fc16e08e6b343c86b4521d32146e86faffb708bbde9dc8c"} Feb 03 06:18:52 crc kubenswrapper[4872]: I0203 06:18:52.831872 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" podStartSLOduration=3.831856052 
podStartE2EDuration="3.831856052s" podCreationTimestamp="2026-02-03 06:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:18:52.831029411 +0000 UTC m=+1103.413720855" watchObservedRunningTime="2026-02-03 06:18:52.831856052 +0000 UTC m=+1103.414547466" Feb 03 06:18:53 crc kubenswrapper[4872]: I0203 06:18:53.045482 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:18:53 crc kubenswrapper[4872]: I0203 06:18:53.156295 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:18:53 crc kubenswrapper[4872]: I0203 06:18:53.828661 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e3554454-7e8b-4f41-ba97-75595a3f10aa","Type":"ContainerStarted","Data":"58f14e7aff35fea7ee1929b8239c1bbb80141e55dc9e26d306bb678ac76248dc"} Feb 03 06:18:54 crc kubenswrapper[4872]: I0203 06:18:54.145764 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:18:54 crc kubenswrapper[4872]: I0203 06:18:54.836895 4872 generic.go:334] "Generic (PLEG): container finished" podID="574f0ecd-3f3a-448f-a958-9c606833ad00" containerID="5bf5bc48e219b91be187fb1e5a8a28f87f74da986ff2a591b8e5591bdcd0a68e" exitCode=0 Feb 03 06:18:54 crc kubenswrapper[4872]: I0203 06:18:54.837001 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-98455" event={"ID":"574f0ecd-3f3a-448f-a958-9c606833ad00","Type":"ContainerDied","Data":"5bf5bc48e219b91be187fb1e5a8a28f87f74da986ff2a591b8e5591bdcd0a68e"} Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.416125 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-98455" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.542784 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brlll\" (UniqueName: \"kubernetes.io/projected/574f0ecd-3f3a-448f-a958-9c606833ad00-kube-api-access-brlll\") pod \"574f0ecd-3f3a-448f-a958-9c606833ad00\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.542875 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-combined-ca-bundle\") pod \"574f0ecd-3f3a-448f-a958-9c606833ad00\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.542993 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-scripts\") pod \"574f0ecd-3f3a-448f-a958-9c606833ad00\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.543062 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-config-data\") pod \"574f0ecd-3f3a-448f-a958-9c606833ad00\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.543090 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-credential-keys\") pod \"574f0ecd-3f3a-448f-a958-9c606833ad00\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.543127 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-fernet-keys\") pod \"574f0ecd-3f3a-448f-a958-9c606833ad00\" (UID: \"574f0ecd-3f3a-448f-a958-9c606833ad00\") " Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.562236 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-scripts" (OuterVolumeSpecName: "scripts") pod "574f0ecd-3f3a-448f-a958-9c606833ad00" (UID: "574f0ecd-3f3a-448f-a958-9c606833ad00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.565606 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "574f0ecd-3f3a-448f-a958-9c606833ad00" (UID: "574f0ecd-3f3a-448f-a958-9c606833ad00"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.570885 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574f0ecd-3f3a-448f-a958-9c606833ad00-kube-api-access-brlll" (OuterVolumeSpecName: "kube-api-access-brlll") pod "574f0ecd-3f3a-448f-a958-9c606833ad00" (UID: "574f0ecd-3f3a-448f-a958-9c606833ad00"). InnerVolumeSpecName "kube-api-access-brlll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.571968 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "574f0ecd-3f3a-448f-a958-9c606833ad00" (UID: "574f0ecd-3f3a-448f-a958-9c606833ad00"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.592306 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "574f0ecd-3f3a-448f-a958-9c606833ad00" (UID: "574f0ecd-3f3a-448f-a958-9c606833ad00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.640834 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.644981 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.645011 4872 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.645020 4872 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.645030 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brlll\" (UniqueName: \"kubernetes.io/projected/574f0ecd-3f3a-448f-a958-9c606833ad00-kube-api-access-brlll\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.645038 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.647291 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-config-data" (OuterVolumeSpecName: "config-data") pod "574f0ecd-3f3a-448f-a958-9c606833ad00" (UID: "574f0ecd-3f3a-448f-a958-9c606833ad00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.727049 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cpkjz"] Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.727327 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerName="dnsmasq-dns" containerID="cri-o://b357e097bab4dc87b01c855216d3fd633e15df64dc5a0c244d30e53718b98670" gracePeriod=10 Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.748408 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574f0ecd-3f3a-448f-a958-9c606833ad00-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.899940 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-98455" event={"ID":"574f0ecd-3f3a-448f-a958-9c606833ad00","Type":"ContainerDied","Data":"4f4ae9a5f2a9d4db282090e91512b17d2aa7841f80f07b5712f11b05eae01ed0"} Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.900208 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4ae9a5f2a9d4db282090e91512b17d2aa7841f80f07b5712f11b05eae01ed0" Feb 03 06:18:59 crc kubenswrapper[4872]: I0203 06:18:59.900278 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-98455" Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.549926 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6447fd6947-kkqrr"] Feb 03 06:19:00 crc kubenswrapper[4872]: E0203 06:19:00.552180 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2597923-b44f-4e69-b47f-e842c036f91f" containerName="init" Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.552250 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2597923-b44f-4e69-b47f-e842c036f91f" containerName="init" Feb 03 06:19:00 crc kubenswrapper[4872]: E0203 06:19:00.552299 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574f0ecd-3f3a-448f-a958-9c606833ad00" containerName="keystone-bootstrap" Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.552352 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="574f0ecd-3f3a-448f-a958-9c606833ad00" containerName="keystone-bootstrap" Feb 03 06:19:00 crc kubenswrapper[4872]: E0203 06:19:00.552422 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2597923-b44f-4e69-b47f-e842c036f91f" containerName="dnsmasq-dns" Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.552469 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2597923-b44f-4e69-b47f-e842c036f91f" containerName="dnsmasq-dns" Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.552665 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2597923-b44f-4e69-b47f-e842c036f91f" containerName="dnsmasq-dns" Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.552755 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="574f0ecd-3f3a-448f-a958-9c606833ad00" containerName="keystone-bootstrap" Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.553326 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.557261 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.557474 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.557547 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mb5hw"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.557863 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.558136 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.558668 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.575607 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6447fd6947-kkqrr"]
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.650596 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.667452 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-scripts\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.667592 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-combined-ca-bundle\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.667638 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-public-tls-certs\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.667806 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45qnj\" (UniqueName: \"kubernetes.io/projected/885d40a9-ca6e-4beb-9782-35099d10bf35-kube-api-access-45qnj\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.667882 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-internal-tls-certs\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.667910 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-credential-keys\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.667938 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-config-data\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.668000 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-fernet-keys\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.737965 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.771723 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-combined-ca-bundle\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.771785 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-public-tls-certs\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.771866 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45qnj\" (UniqueName: \"kubernetes.io/projected/885d40a9-ca6e-4beb-9782-35099d10bf35-kube-api-access-45qnj\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.771901 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-internal-tls-certs\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.771935 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-credential-keys\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.771954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-config-data\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.771983 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-fernet-keys\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.772032 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-scripts\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.794722 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-public-tls-certs\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.796924 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-scripts\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.797224 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-credential-keys\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.797297 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-combined-ca-bundle\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.806235 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-internal-tls-certs\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.810660 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-config-data\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.811611 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45qnj\" (UniqueName: \"kubernetes.io/projected/885d40a9-ca6e-4beb-9782-35099d10bf35-kube-api-access-45qnj\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.816671 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/885d40a9-ca6e-4beb-9782-35099d10bf35-fernet-keys\") pod \"keystone-6447fd6947-kkqrr\" (UID: \"885d40a9-ca6e-4beb-9782-35099d10bf35\") " pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.887146 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.897374 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57dc94599b-bvf7j" podUID="f475ab66-31e6-46da-ad2e-8e8279e33b68" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.941581 4872 generic.go:334] "Generic (PLEG): container finished" podID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerID="b357e097bab4dc87b01c855216d3fd633e15df64dc5a0c244d30e53718b98670" exitCode=0
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.942314 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" event={"ID":"a3433831-e142-4ec1-b8ce-dc1d064c3ffa","Type":"ContainerDied","Data":"b357e097bab4dc87b01c855216d3fd633e15df64dc5a0c244d30e53718b98670"}
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.952082 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e3554454-7e8b-4f41-ba97-75595a3f10aa","Type":"ContainerStarted","Data":"386fa088a523c0e37ca3db158ec75406a55e1fbee0529f5326749fcc249374d9"}
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.952596 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-log" containerID="cri-o://58f14e7aff35fea7ee1929b8239c1bbb80141e55dc9e26d306bb678ac76248dc" gracePeriod=30
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.953104 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-httpd" containerID="cri-o://386fa088a523c0e37ca3db158ec75406a55e1fbee0529f5326749fcc249374d9" gracePeriod=30
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.962660 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7d063e20-3fc3-4024-a51a-9e5a37c2d177","Type":"ContainerStarted","Data":"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e"}
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.963026 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-log" containerID="cri-o://702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8" gracePeriod=30
Feb 03 06:19:00 crc kubenswrapper[4872]: I0203 06:19:00.963233 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-httpd" containerID="cri-o://46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e" gracePeriod=30
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.010515 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.010493786 podStartE2EDuration="12.010493786s" podCreationTimestamp="2026-02-03 06:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:00.981201999 +0000 UTC m=+1111.563893413" watchObservedRunningTime="2026-02-03 06:19:01.010493786 +0000 UTC m=+1111.593185200"
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.024123 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.024100634 podStartE2EDuration="12.024100634s" podCreationTimestamp="2026-02-03 06:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:01.012785831 +0000 UTC m=+1111.595477245" watchObservedRunningTime="2026-02-03 06:19:01.024100634 +0000 UTC m=+1111.606792048"
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.281824 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.282378 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.282419 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn"
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.283148 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ee0abf72dd022e7907c6192d8075ae69f194cd75ecc0b9f792ce2b1786381c8"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.283190 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://1ee0abf72dd022e7907c6192d8075ae69f194cd75ecc0b9f792ce2b1786381c8" gracePeriod=600
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.721799 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6447fd6947-kkqrr"]
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.765348 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz"
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.817920 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.916765 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqmrs\" (UniqueName: \"kubernetes.io/projected/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-kube-api-access-pqmrs\") pod \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.916810 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-combined-ca-bundle\") pod \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.916842 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-httpd-run\") pod \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.916869 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lhkt\" (UniqueName: \"kubernetes.io/projected/7d063e20-3fc3-4024-a51a-9e5a37c2d177-kube-api-access-8lhkt\") pod \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.916904 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-nb\") pod \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.916928 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-config-data\") pod \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.916943 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-scripts\") pod \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.917004 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-logs\") pod \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.917036 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\" (UID: \"7d063e20-3fc3-4024-a51a-9e5a37c2d177\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.917083 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config\") pod \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.917137 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-sb\") pod \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.917161 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-dns-svc\") pod \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") "
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.917675 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-logs" (OuterVolumeSpecName: "logs") pod "7d063e20-3fc3-4024-a51a-9e5a37c2d177" (UID: "7d063e20-3fc3-4024-a51a-9e5a37c2d177"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.918052 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-logs\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.922785 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-scripts" (OuterVolumeSpecName: "scripts") pod "7d063e20-3fc3-4024-a51a-9e5a37c2d177" (UID: "7d063e20-3fc3-4024-a51a-9e5a37c2d177"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.923026 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7d063e20-3fc3-4024-a51a-9e5a37c2d177" (UID: "7d063e20-3fc3-4024-a51a-9e5a37c2d177"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.938215 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d063e20-3fc3-4024-a51a-9e5a37c2d177-kube-api-access-8lhkt" (OuterVolumeSpecName: "kube-api-access-8lhkt") pod "7d063e20-3fc3-4024-a51a-9e5a37c2d177" (UID: "7d063e20-3fc3-4024-a51a-9e5a37c2d177"). InnerVolumeSpecName "kube-api-access-8lhkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.939452 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "7d063e20-3fc3-4024-a51a-9e5a37c2d177" (UID: "7d063e20-3fc3-4024-a51a-9e5a37c2d177"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 06:19:01 crc kubenswrapper[4872]: I0203 06:19:01.960712 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-kube-api-access-pqmrs" (OuterVolumeSpecName: "kube-api-access-pqmrs") pod "a3433831-e142-4ec1-b8ce-dc1d064c3ffa" (UID: "a3433831-e142-4ec1-b8ce-dc1d064c3ffa"). InnerVolumeSpecName "kube-api-access-pqmrs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.002892 4872 generic.go:334] "Generic (PLEG): container finished" podID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerID="46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e" exitCode=143
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.009835 4872 generic.go:334] "Generic (PLEG): container finished" podID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerID="702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8" exitCode=143
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.003186 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7d063e20-3fc3-4024-a51a-9e5a37c2d177","Type":"ContainerDied","Data":"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e"}
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.009985 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7d063e20-3fc3-4024-a51a-9e5a37c2d177","Type":"ContainerDied","Data":"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8"}
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.010001 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7d063e20-3fc3-4024-a51a-9e5a37c2d177","Type":"ContainerDied","Data":"c7860474ce2654900fc16e08e6b343c86b4521d32146e86faffb708bbde9dc8c"}
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.010019 4872 scope.go:117] "RemoveContainer" containerID="46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.003142 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.020957 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqmrs\" (UniqueName: \"kubernetes.io/projected/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-kube-api-access-pqmrs\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.020982 4872 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d063e20-3fc3-4024-a51a-9e5a37c2d177-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.020993 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lhkt\" (UniqueName: \"kubernetes.io/projected/7d063e20-3fc3-4024-a51a-9e5a37c2d177-kube-api-access-8lhkt\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.021001 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.021023 4872 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.029235 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6447fd6947-kkqrr" event={"ID":"885d40a9-ca6e-4beb-9782-35099d10bf35","Type":"ContainerStarted","Data":"20a4def9b7d3a4b1abeb7da7f2bf1fefe6ba2987ed0b819ddf54fae369a32740"}
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.062042 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbkgp" event={"ID":"13e3e69a-6af1-439d-a9e4-2295a6206492","Type":"ContainerStarted","Data":"5bf51ac0ecb2ab6a9284994742efdbe55ca56dd83ca85291b53d4d15f1e8268b"}
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.077898 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2accf2e-9270-4b7b-ac0e-7062b53c7cda","Type":"ContainerStarted","Data":"0577a59e8d68959f7192529bba8f9c5a74fb63bc21b9264f51791cbddd61af8a"}
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.086708 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cr92l" event={"ID":"48007ee1-953a-42c7-9279-2f348eb7bffb","Type":"ContainerStarted","Data":"7a05757675f2b606a0f56fa7b9c83a0b39f270b9553586945b17b42243f2d508"}
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.087723 4872 scope.go:117] "RemoveContainer" containerID="702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.103538 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dbkgp" podStartSLOduration=4.469347453 podStartE2EDuration="51.103523863s" podCreationTimestamp="2026-02-03 06:18:11 +0000 UTC" firstStartedPulling="2026-02-03 06:18:14.334114148 +0000 UTC m=+1064.916805562" lastFinishedPulling="2026-02-03 06:19:00.968290558 +0000 UTC m=+1111.550981972" observedRunningTime="2026-02-03 06:19:02.101645848 +0000 UTC m=+1112.684337252" watchObservedRunningTime="2026-02-03 06:19:02.103523863 +0000 UTC m=+1112.686215277"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.110653 4872 generic.go:334] "Generic (PLEG): container finished" podID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerID="386fa088a523c0e37ca3db158ec75406a55e1fbee0529f5326749fcc249374d9" exitCode=143
podID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerID="386fa088a523c0e37ca3db158ec75406a55e1fbee0529f5326749fcc249374d9" exitCode=143 Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.110775 4872 generic.go:334] "Generic (PLEG): container finished" podID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerID="58f14e7aff35fea7ee1929b8239c1bbb80141e55dc9e26d306bb678ac76248dc" exitCode=143 Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.110884 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e3554454-7e8b-4f41-ba97-75595a3f10aa","Type":"ContainerDied","Data":"386fa088a523c0e37ca3db158ec75406a55e1fbee0529f5326749fcc249374d9"} Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.110958 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e3554454-7e8b-4f41-ba97-75595a3f10aa","Type":"ContainerDied","Data":"58f14e7aff35fea7ee1929b8239c1bbb80141e55dc9e26d306bb678ac76248dc"} Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.127750 4872 scope.go:117] "RemoveContainer" containerID="46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e" Feb 03 06:19:02 crc kubenswrapper[4872]: E0203 06:19:02.134513 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e\": container with ID starting with 46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e not found: ID does not exist" containerID="46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.134543 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e"} err="failed to get container status \"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e\": rpc error: code = NotFound desc = could not find container \"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e\": container with ID starting with 46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e not found: ID does not exist" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.134564 4872 scope.go:117] "RemoveContainer" containerID="702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8" Feb 03 06:19:02 crc kubenswrapper[4872]: E0203 06:19:02.147403 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8\": container with ID starting with 702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8 not found: ID does not exist" containerID="702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.147448 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8"} err="failed to get container status \"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8\": rpc error: code = NotFound desc = could not find container \"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8\": container with ID starting with 702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8 not found: ID does not exist" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 
06:19:02.147473 4872 scope.go:117] "RemoveContainer" containerID="46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.148748 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3433831-e142-4ec1-b8ce-dc1d064c3ffa" (UID: "a3433831-e142-4ec1-b8ce-dc1d064c3ffa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.148756 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e"} err="failed to get container status \"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e\": rpc error: code = NotFound desc = could not find container \"46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e\": container with ID starting with 46a6be16d5e6537020c88853b193e41ba24075a4b8c5bd7f0ac771d4cb3a326e not found: ID does not exist" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.149168 4872 scope.go:117] "RemoveContainer" containerID="702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.150164 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8"} err="failed to get container status \"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8\": rpc error: code = NotFound desc = could not find container \"702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8\": container with ID starting with 702702ae6e149c8055448469e23654add6fb0a927e42420edefc67faa8e9b4e8 not found: ID does not exist" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.150362 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="1ee0abf72dd022e7907c6192d8075ae69f194cd75ecc0b9f792ce2b1786381c8" exitCode=0 Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.159191 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cr92l" podStartSLOduration=4.05267988 podStartE2EDuration="50.159171545s" podCreationTimestamp="2026-02-03 06:18:12 +0000 UTC" firstStartedPulling="2026-02-03 06:18:14.570758335 +0000 UTC m=+1065.153449749" lastFinishedPulling="2026-02-03 06:19:00.67725 +0000 UTC m=+1111.259941414" observedRunningTime="2026-02-03 06:19:02.134870028 +0000 UTC m=+1112.717561442" watchObservedRunningTime="2026-02-03 06:19:02.159171545 +0000 UTC m=+1112.741862959" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.166384 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"1ee0abf72dd022e7907c6192d8075ae69f194cd75ecc0b9f792ce2b1786381c8"} Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.169400 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"3f33fa16560568ffcc087b44ca5f1c955596c896bcd4662e8ab64b8586efed14"} Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.169483 4872 
scope.go:117] "RemoveContainer" containerID="8daff91be8f2ae554bbad9124562735aa9d055c764b0bc522db54af803ed7992" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.171556 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" event={"ID":"a3433831-e142-4ec1-b8ce-dc1d064c3ffa","Type":"ContainerDied","Data":"23ce661d5a23143a978b73c97c33fec8acbf16cc6e8bcceebdeadc211831d327"} Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.171675 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cpkjz" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.225011 4872 scope.go:117] "RemoveContainer" containerID="b357e097bab4dc87b01c855216d3fd633e15df64dc5a0c244d30e53718b98670" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.226439 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.248009 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d063e20-3fc3-4024-a51a-9e5a37c2d177" (UID: "7d063e20-3fc3-4024-a51a-9e5a37c2d177"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.259861 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-config-data" (OuterVolumeSpecName: "config-data") pod "7d063e20-3fc3-4024-a51a-9e5a37c2d177" (UID: "7d063e20-3fc3-4024-a51a-9e5a37c2d177"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.300325 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3433831-e142-4ec1-b8ce-dc1d064c3ffa" (UID: "a3433831-e142-4ec1-b8ce-dc1d064c3ffa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.301671 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3433831-e142-4ec1-b8ce-dc1d064c3ffa" (UID: "a3433831-e142-4ec1-b8ce-dc1d064c3ffa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.316810 4872 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.337183 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config" (OuterVolumeSpecName: "config") pod "a3433831-e142-4ec1-b8ce-dc1d064c3ffa" (UID: "a3433831-e142-4ec1-b8ce-dc1d064c3ffa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.337327 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config\") pod \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\" (UID: \"a3433831-e142-4ec1-b8ce-dc1d064c3ffa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.339307 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.339331 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d063e20-3fc3-4024-a51a-9e5a37c2d177-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.339343 4872 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.339357 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.339367 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:02 crc kubenswrapper[4872]: W0203 06:19:02.339658 4872 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a3433831-e142-4ec1-b8ce-dc1d064c3ffa/volumes/kubernetes.io~configmap/config Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.339673 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config" (OuterVolumeSpecName: "config") pod "a3433831-e142-4ec1-b8ce-dc1d064c3ffa" (UID: "a3433831-e142-4ec1-b8ce-dc1d064c3ffa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.447412 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3433831-e142-4ec1-b8ce-dc1d064c3ffa-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.492743 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.505371 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.506182 4872 scope.go:117] "RemoveContainer" containerID="295e6a9491c1300120487148ce28c102be2d043742a71739aee77407642841fc" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.527871 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:19:02 crc kubenswrapper[4872]: E0203 06:19:02.528190 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerName="dnsmasq-dns" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.528201 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerName="dnsmasq-dns" Feb 03 06:19:02 crc kubenswrapper[4872]: E0203 06:19:02.528213 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-httpd" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.528220 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-httpd" Feb 03 06:19:02 crc kubenswrapper[4872]: E0203 06:19:02.528241 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerName="init" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.528247 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerName="init" Feb 03 06:19:02 crc kubenswrapper[4872]: E0203 06:19:02.528264 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-log" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.528269 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-log" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.528430 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" containerName="dnsmasq-dns" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.528453 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-log" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.528479 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" containerName="glance-httpd" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.529293 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.540297 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.540522 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.573423 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.631747 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cpkjz"] Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.644743 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cpkjz"] Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651038 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651096 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651129 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mwx\" (UniqueName: \"kubernetes.io/projected/c71f7574-20a8-448b-bb5e-af62f7311a08-kube-api-access-q6mwx\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651153 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651175 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-scripts\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651202 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651220 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-config-data\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.651237 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-logs\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.675939 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753224 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-httpd-run\") pod \"e3554454-7e8b-4f41-ba97-75595a3f10aa\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753274 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e3554454-7e8b-4f41-ba97-75595a3f10aa\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753345 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-combined-ca-bundle\") pod \"e3554454-7e8b-4f41-ba97-75595a3f10aa\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753389 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77vh5\" (UniqueName: \"kubernetes.io/projected/e3554454-7e8b-4f41-ba97-75595a3f10aa-kube-api-access-77vh5\") pod \"e3554454-7e8b-4f41-ba97-75595a3f10aa\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753425 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-scripts\") pod \"e3554454-7e8b-4f41-ba97-75595a3f10aa\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753498 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-logs\") pod \"e3554454-7e8b-4f41-ba97-75595a3f10aa\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753540 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-config-data\") pod \"e3554454-7e8b-4f41-ba97-75595a3f10aa\" (UID: \"e3554454-7e8b-4f41-ba97-75595a3f10aa\") " Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753895 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-config-data\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " 
pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.753939 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-logs\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.754060 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.754105 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.754129 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mwx\" (UniqueName: \"kubernetes.io/projected/c71f7574-20a8-448b-bb5e-af62f7311a08-kube-api-access-q6mwx\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.754153 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.754186 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-scripts\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.754209 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.754780 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.759125 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0" Feb 03 06:19:02 crc kubenswrapper[4872]: 
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.764025 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-logs" (OuterVolumeSpecName: "logs") pod "e3554454-7e8b-4f41-ba97-75595a3f10aa" (UID: "e3554454-7e8b-4f41-ba97-75595a3f10aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.764238 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-logs\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.793450 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mwx\" (UniqueName: \"kubernetes.io/projected/c71f7574-20a8-448b-bb5e-af62f7311a08-kube-api-access-q6mwx\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.812705 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-scripts\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.813198 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.814467 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.814874 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3554454-7e8b-4f41-ba97-75595a3f10aa-kube-api-access-77vh5" (OuterVolumeSpecName: "kube-api-access-77vh5") pod "e3554454-7e8b-4f41-ba97-75595a3f10aa" (UID: "e3554454-7e8b-4f41-ba97-75595a3f10aa"). InnerVolumeSpecName "kube-api-access-77vh5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.814970 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-scripts" (OuterVolumeSpecName: "scripts") pod "e3554454-7e8b-4f41-ba97-75595a3f10aa" (UID: "e3554454-7e8b-4f41-ba97-75595a3f10aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.815024 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "e3554454-7e8b-4f41-ba97-75595a3f10aa" (UID: "e3554454-7e8b-4f41-ba97-75595a3f10aa"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.834320 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-config-data\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.855675 4872 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.859756 4872 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.859890 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77vh5\" (UniqueName: \"kubernetes.io/projected/e3554454-7e8b-4f41-ba97-75595a3f10aa-kube-api-access-77vh5\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.859957 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.860022 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3554454-7e8b-4f41-ba97-75595a3f10aa-logs\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.880865 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " pod="openstack/glance-default-external-api-0"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.882963 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3554454-7e8b-4f41-ba97-75595a3f10aa" (UID: "e3554454-7e8b-4f41-ba97-75595a3f10aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.883586 4872 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.915828 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-config-data" (OuterVolumeSpecName: "config-data") pod "e3554454-7e8b-4f41-ba97-75595a3f10aa" (UID: "e3554454-7e8b-4f41-ba97-75595a3f10aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.961183 4872 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.961448 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:02 crc kubenswrapper[4872]: I0203 06:19:02.961530 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3554454-7e8b-4f41-ba97-75595a3f10aa-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.148663 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.193966 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6447fd6947-kkqrr" event={"ID":"885d40a9-ca6e-4beb-9782-35099d10bf35","Type":"ContainerStarted","Data":"2b89cd10bd429d872507d22a9f4185dcf56676f7cf7e204c963993abb0efd681"}
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.194074 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6447fd6947-kkqrr"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.212029 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e3554454-7e8b-4f41-ba97-75595a3f10aa","Type":"ContainerDied","Data":"4aa929bd2093672fb20e1fa5f6623ba44913f9362b46850d864270fd371422b7"}
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.212075 4872 scope.go:117] "RemoveContainer" containerID="386fa088a523c0e37ca3db158ec75406a55e1fbee0529f5326749fcc249374d9"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.212103 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.223061 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6447fd6947-kkqrr" podStartSLOduration=3.223048039 podStartE2EDuration="3.223048039s" podCreationTimestamp="2026-02-03 06:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:03.220043615 +0000 UTC m=+1113.802735029" watchObservedRunningTime="2026-02-03 06:19:03.223048039 +0000 UTC m=+1113.805739453"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.256468 4872 scope.go:117] "RemoveContainer" containerID="58f14e7aff35fea7ee1929b8239c1bbb80141e55dc9e26d306bb678ac76248dc"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.341903 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.377868 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.426346 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 06:19:03 crc kubenswrapper[4872]: E0203 06:19:03.436793 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-httpd"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.436831 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-httpd"
Feb 03 06:19:03 crc kubenswrapper[4872]: E0203 06:19:03.436875 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-log"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.436884 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-log"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.447224 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-log"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.447289 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" containerName="glance-httpd"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.457628 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.457919 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.465103 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.465812 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.577305 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.577612 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.577657 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.577717 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.577746 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.577826 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.577845 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0"
Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.578229 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24jm9\" (UniqueName: \"kubernetes.io/projected/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-kube-api-access-24jm9\") pod
\"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680022 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680067 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680133 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680168 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680226 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24jm9\" (UniqueName: \"kubernetes.io/projected/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-kube-api-access-24jm9\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680278 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680299 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.680348 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.681382 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " 
pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.681607 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-logs\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.681995 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.691703 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.705046 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.706610 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.707333 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.712806 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24jm9\" (UniqueName: \"kubernetes.io/projected/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-kube-api-access-24jm9\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.731628 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.789646 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:03 crc kubenswrapper[4872]: I0203 06:19:03.977942 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:19:04 crc kubenswrapper[4872]: W0203 06:19:04.005055 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc71f7574_20a8_448b_bb5e_af62f7311a08.slice/crio-0a1af5817a9684688536c45699f8b8607c6b14476a652fbdae59929456738a45 WatchSource:0}: Error finding container 0a1af5817a9684688536c45699f8b8607c6b14476a652fbdae59929456738a45: Status 404 returned error can't find the container with id 0a1af5817a9684688536c45699f8b8607c6b14476a652fbdae59929456738a45 Feb 03 06:19:04 crc kubenswrapper[4872]: I0203 06:19:04.136304 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d063e20-3fc3-4024-a51a-9e5a37c2d177" path="/var/lib/kubelet/pods/7d063e20-3fc3-4024-a51a-9e5a37c2d177/volumes" Feb 03 06:19:04 crc kubenswrapper[4872]: I0203 06:19:04.137346 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3433831-e142-4ec1-b8ce-dc1d064c3ffa" path="/var/lib/kubelet/pods/a3433831-e142-4ec1-b8ce-dc1d064c3ffa/volumes" Feb 03 06:19:04 crc kubenswrapper[4872]: I0203 06:19:04.138087 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3554454-7e8b-4f41-ba97-75595a3f10aa" path="/var/lib/kubelet/pods/e3554454-7e8b-4f41-ba97-75595a3f10aa/volumes" Feb 03 06:19:04 crc kubenswrapper[4872]: I0203 06:19:04.290873 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m54jj" event={"ID":"9d670aec-b637-4fe6-b046-794d9628b49b","Type":"ContainerStarted","Data":"2a723d3a6a85c2b1ba0422b2b817e2490cb46df9fc9d206d89e541f7dabc42ba"} Feb 03 06:19:04 crc kubenswrapper[4872]: I0203 06:19:04.301102 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c71f7574-20a8-448b-bb5e-af62f7311a08","Type":"ContainerStarted","Data":"0a1af5817a9684688536c45699f8b8607c6b14476a652fbdae59929456738a45"} Feb 03 06:19:04 crc kubenswrapper[4872]: I0203 06:19:04.343752 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-m54jj" podStartSLOduration=5.179858936 podStartE2EDuration="53.343729651s" podCreationTimestamp="2026-02-03 06:18:11 +0000 UTC" firstStartedPulling="2026-02-03 06:18:14.525030723 +0000 UTC m=+1065.107722137" lastFinishedPulling="2026-02-03 06:19:02.688901398 +0000 UTC m=+1113.271592852" observedRunningTime="2026-02-03 06:19:04.3192173 +0000 UTC m=+1114.901908714" watchObservedRunningTime="2026-02-03 06:19:04.343729651 +0000 UTC m=+1114.926421065" Feb 03 06:19:04 crc kubenswrapper[4872]: I0203 06:19:04.559493 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:19:05 crc kubenswrapper[4872]: I0203 06:19:05.375664 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4858d6ef-a1ea-4b48-ad9c-0a56860e802b","Type":"ContainerStarted","Data":"6077b9f6ee10b2cbb56411c44ac1dd7be88243893004fbe6b56d6e7d358d5049"} Feb 03 06:19:05 crc kubenswrapper[4872]: I0203 06:19:05.390247 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c71f7574-20a8-448b-bb5e-af62f7311a08","Type":"ContainerStarted","Data":"6ad87165d548c13c101ca2d9191279df4398ffd900336678a80732ca52b40462"} Feb 03 06:19:06 crc kubenswrapper[4872]: I0203 06:19:06.414652 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4858d6ef-a1ea-4b48-ad9c-0a56860e802b","Type":"ContainerStarted","Data":"af3d526bad9ca30006bcc071fd6488403ee84c207b299a9450b2f59cdf5e105c"} Feb 03 06:19:07 crc kubenswrapper[4872]: I0203 06:19:07.435252 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c71f7574-20a8-448b-bb5e-af62f7311a08","Type":"ContainerStarted","Data":"6be78455df4cde88131ca45af76207cd1e7de9daa4a8ccf672e63fc2b023731a"} Feb 03 06:19:07 crc kubenswrapper[4872]: I0203 06:19:07.439888 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4858d6ef-a1ea-4b48-ad9c-0a56860e802b","Type":"ContainerStarted","Data":"a1bf2e37456770e9e654d093234729ded06287d191d8b32b2ee6b68adf601d1d"} Feb 03 06:19:07 crc kubenswrapper[4872]: I0203 06:19:07.457355 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.457329901 podStartE2EDuration="5.457329901s" podCreationTimestamp="2026-02-03 06:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:07.453400966 +0000 UTC m=+1118.036092390" watchObservedRunningTime="2026-02-03 06:19:07.457329901 +0000 UTC m=+1118.040021315" Feb 03 06:19:07 crc kubenswrapper[4872]: I0203 06:19:07.496787 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.496765422 podStartE2EDuration="4.496765422s" podCreationTimestamp="2026-02-03 06:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:07.491133016 +0000 UTC m=+1118.073824430" watchObservedRunningTime="2026-02-03 06:19:07.496765422 +0000 UTC m=+1118.079456836" Feb 03 06:19:10 crc kubenswrapper[4872]: I0203 06:19:10.721898 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 06:19:10 crc kubenswrapper[4872]: I0203 06:19:10.887983 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57dc94599b-bvf7j" podUID="f475ab66-31e6-46da-ad2e-8e8279e33b68" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 03 06:19:11 crc kubenswrapper[4872]: I0203 06:19:11.476146 4872 generic.go:334] "Generic (PLEG): container finished" podID="13e3e69a-6af1-439d-a9e4-2295a6206492" containerID="5bf51ac0ecb2ab6a9284994742efdbe55ca56dd83ca85291b53d4d15f1e8268b" exitCode=0 Feb 03 06:19:11 crc kubenswrapper[4872]: I0203 06:19:11.476201 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbkgp" 
event={"ID":"13e3e69a-6af1-439d-a9e4-2295a6206492","Type":"ContainerDied","Data":"5bf51ac0ecb2ab6a9284994742efdbe55ca56dd83ca85291b53d4d15f1e8268b"} Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.149888 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.150349 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.405642 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.406505 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.507098 4872 generic.go:334] "Generic (PLEG): container finished" podID="48007ee1-953a-42c7-9279-2f348eb7bffb" containerID="7a05757675f2b606a0f56fa7b9c83a0b39f270b9553586945b17b42243f2d508" exitCode=0 Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.509217 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cr92l" event={"ID":"48007ee1-953a-42c7-9279-2f348eb7bffb","Type":"ContainerDied","Data":"7a05757675f2b606a0f56fa7b9c83a0b39f270b9553586945b17b42243f2d508"} Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.509255 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.509723 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.790666 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.790769 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.849066 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:13 crc kubenswrapper[4872]: I0203 06:19:13.890367 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:14 crc kubenswrapper[4872]: I0203 06:19:14.516785 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:14 crc kubenswrapper[4872]: I0203 06:19:14.517089 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:15 crc kubenswrapper[4872]: I0203 06:19:15.524815 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:19:15 crc kubenswrapper[4872]: I0203 06:19:15.524835 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.383723 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-dbkgp" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.466312 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q47wx\" (UniqueName: \"kubernetes.io/projected/13e3e69a-6af1-439d-a9e4-2295a6206492-kube-api-access-q47wx\") pod \"13e3e69a-6af1-439d-a9e4-2295a6206492\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.466398 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-config-data\") pod \"13e3e69a-6af1-439d-a9e4-2295a6206492\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.466476 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-scripts\") pod \"13e3e69a-6af1-439d-a9e4-2295a6206492\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.466544 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e3e69a-6af1-439d-a9e4-2295a6206492-logs\") pod \"13e3e69a-6af1-439d-a9e4-2295a6206492\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.483199 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-combined-ca-bundle\") pod \"13e3e69a-6af1-439d-a9e4-2295a6206492\" (UID: \"13e3e69a-6af1-439d-a9e4-2295a6206492\") " Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.485085 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13e3e69a-6af1-439d-a9e4-2295a6206492-logs" (OuterVolumeSpecName: "logs") pod "13e3e69a-6af1-439d-a9e4-2295a6206492" (UID: "13e3e69a-6af1-439d-a9e4-2295a6206492"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.519090 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e3e69a-6af1-439d-a9e4-2295a6206492-kube-api-access-q47wx" (OuterVolumeSpecName: "kube-api-access-q47wx") pod "13e3e69a-6af1-439d-a9e4-2295a6206492" (UID: "13e3e69a-6af1-439d-a9e4-2295a6206492"). InnerVolumeSpecName "kube-api-access-q47wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.522654 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-scripts" (OuterVolumeSpecName: "scripts") pod "13e3e69a-6af1-439d-a9e4-2295a6206492" (UID: "13e3e69a-6af1-439d-a9e4-2295a6206492"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.534740 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-config-data" (OuterVolumeSpecName: "config-data") pod "13e3e69a-6af1-439d-a9e4-2295a6206492" (UID: "13e3e69a-6af1-439d-a9e4-2295a6206492"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.558227 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dbkgp" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.558882 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dbkgp" event={"ID":"13e3e69a-6af1-439d-a9e4-2295a6206492","Type":"ContainerDied","Data":"7047e739ad5fc01d01f7d4e36bfa19297ac1088d7cadd29a219243d558c9a3e0"} Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.558914 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7047e739ad5fc01d01f7d4e36bfa19297ac1088d7cadd29a219243d558c9a3e0" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.573365 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13e3e69a-6af1-439d-a9e4-2295a6206492" (UID: "13e3e69a-6af1-439d-a9e4-2295a6206492"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.585949 4872 generic.go:334] "Generic (PLEG): container finished" podID="9ca50ee1-d592-41c3-869f-480e7d3d02f8" containerID="c83d2ff3724bd518c1a1b53e81f3ee27c447946dfae3855a6c7384764017fa72" exitCode=0 Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.586058 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.586071 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.586853 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-95dn6" event={"ID":"9ca50ee1-d592-41c3-869f-480e7d3d02f8","Type":"ContainerDied","Data":"c83d2ff3724bd518c1a1b53e81f3ee27c447946dfae3855a6c7384764017fa72"} Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.590311 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.590345 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q47wx\" (UniqueName: \"kubernetes.io/projected/13e3e69a-6af1-439d-a9e4-2295a6206492-kube-api-access-q47wx\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.590361 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.590370 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e3e69a-6af1-439d-a9e4-2295a6206492-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:16 crc kubenswrapper[4872]: I0203 06:19:16.590381 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e3e69a-6af1-439d-a9e4-2295a6206492-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.589239 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cr92l" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.600843 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cr92l" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.600992 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cr92l" event={"ID":"48007ee1-953a-42c7-9279-2f348eb7bffb","Type":"ContainerDied","Data":"db0fe47f259262a173880cb54c848d3c93157a7667a78114b5ef7c34b52dcd61"} Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.601369 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db0fe47f259262a173880cb54c848d3c93157a7667a78114b5ef7c34b52dcd61" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.659748 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f7ccbfc56-8bmzq"] Feb 03 06:19:17 crc kubenswrapper[4872]: E0203 06:19:17.660466 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e3e69a-6af1-439d-a9e4-2295a6206492" containerName="placement-db-sync" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.660479 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e3e69a-6af1-439d-a9e4-2295a6206492" containerName="placement-db-sync" Feb 03 06:19:17 crc kubenswrapper[4872]: E0203 06:19:17.660512 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48007ee1-953a-42c7-9279-2f348eb7bffb" containerName="barbican-db-sync" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.660521 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="48007ee1-953a-42c7-9279-2f348eb7bffb" containerName="barbican-db-sync" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.660757 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e3e69a-6af1-439d-a9e4-2295a6206492" containerName="placement-db-sync" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.660795 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="48007ee1-953a-42c7-9279-2f348eb7bffb" containerName="barbican-db-sync" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.662782 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.667069 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.667216 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.667418 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.667587 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.667714 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mtt5j" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.670476 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f7ccbfc56-8bmzq"] Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.710779 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zp8j\" (UniqueName: \"kubernetes.io/projected/48007ee1-953a-42c7-9279-2f348eb7bffb-kube-api-access-9zp8j\") pod \"48007ee1-953a-42c7-9279-2f348eb7bffb\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.710847 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-combined-ca-bundle\") pod \"48007ee1-953a-42c7-9279-2f348eb7bffb\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.710953 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-db-sync-config-data\") pod \"48007ee1-953a-42c7-9279-2f348eb7bffb\" (UID: \"48007ee1-953a-42c7-9279-2f348eb7bffb\") " Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.711113 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-public-tls-certs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.711138 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-combined-ca-bundle\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.711158 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a09160-e06d-497b-bf29-781a4009c899-logs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.711173 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdz7h\" 
(UniqueName: \"kubernetes.io/projected/90a09160-e06d-497b-bf29-781a4009c899-kube-api-access-xdz7h\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.711224 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-scripts\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.711252 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-config-data\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.711298 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-internal-tls-certs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.737438 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "48007ee1-953a-42c7-9279-2f348eb7bffb" (UID: "48007ee1-953a-42c7-9279-2f348eb7bffb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.737551 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48007ee1-953a-42c7-9279-2f348eb7bffb-kube-api-access-9zp8j" (OuterVolumeSpecName: "kube-api-access-9zp8j") pod "48007ee1-953a-42c7-9279-2f348eb7bffb" (UID: "48007ee1-953a-42c7-9279-2f348eb7bffb"). InnerVolumeSpecName "kube-api-access-9zp8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.799353 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48007ee1-953a-42c7-9279-2f348eb7bffb" (UID: "48007ee1-953a-42c7-9279-2f348eb7bffb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812585 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-internal-tls-certs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812641 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-public-tls-certs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812663 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-combined-ca-bundle\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812686 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a09160-e06d-497b-bf29-781a4009c899-logs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812771 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdz7h\" (UniqueName: \"kubernetes.io/projected/90a09160-e06d-497b-bf29-781a4009c899-kube-api-access-xdz7h\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812836 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-scripts\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812866 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-config-data\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812922 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812933 4872 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48007ee1-953a-42c7-9279-2f348eb7bffb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.812942 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zp8j\" (UniqueName: \"kubernetes.io/projected/48007ee1-953a-42c7-9279-2f348eb7bffb-kube-api-access-9zp8j\") on node \"crc\" DevicePath 
\"\"" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.813329 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90a09160-e06d-497b-bf29-781a4009c899-logs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.821127 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-scripts\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.837162 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-config-data\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.837743 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-internal-tls-certs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.838501 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-combined-ca-bundle\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.839167 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdz7h\" (UniqueName: \"kubernetes.io/projected/90a09160-e06d-497b-bf29-781a4009c899-kube-api-access-xdz7h\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:17 crc kubenswrapper[4872]: I0203 06:19:17.843224 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a09160-e06d-497b-bf29-781a4009c899-public-tls-certs\") pod \"placement-5f7ccbfc56-8bmzq\" (UID: \"90a09160-e06d-497b-bf29-781a4009c899\") " pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:18 crc kubenswrapper[4872]: E0203 06:19:18.022597 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.049032 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-95dn6" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.108269 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.121619 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-config\") pod \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.121757 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qc2c\" (UniqueName: \"kubernetes.io/projected/9ca50ee1-d592-41c3-869f-480e7d3d02f8-kube-api-access-5qc2c\") pod \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.121922 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-combined-ca-bundle\") pod \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\" (UID: \"9ca50ee1-d592-41c3-869f-480e7d3d02f8\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.146874 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca50ee1-d592-41c3-869f-480e7d3d02f8-kube-api-access-5qc2c" (OuterVolumeSpecName: "kube-api-access-5qc2c") pod "9ca50ee1-d592-41c3-869f-480e7d3d02f8" (UID: "9ca50ee1-d592-41c3-869f-480e7d3d02f8"). InnerVolumeSpecName "kube-api-access-5qc2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.174576 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ca50ee1-d592-41c3-869f-480e7d3d02f8" (UID: "9ca50ee1-d592-41c3-869f-480e7d3d02f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.218736 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-config" (OuterVolumeSpecName: "config") pod "9ca50ee1-d592-41c3-869f-480e7d3d02f8" (UID: "9ca50ee1-d592-41c3-869f-480e7d3d02f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.224070 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.224112 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ca50ee1-d592-41c3-869f-480e7d3d02f8-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.224123 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qc2c\" (UniqueName: \"kubernetes.io/projected/9ca50ee1-d592-41c3-869f-480e7d3d02f8-kube-api-access-5qc2c\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.429946 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.530938 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-horizon-secret-key\") pod \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.531046 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-logs\") pod \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.531088 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndmz9\" (UniqueName: \"kubernetes.io/projected/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-kube-api-access-ndmz9\") pod \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.531273 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-scripts\") pod \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.531370 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-config-data\") pod \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\" (UID: \"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.532615 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-logs" (OuterVolumeSpecName: "logs") pod "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" (UID: "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.533539 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.543405 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-kube-api-access-ndmz9" (OuterVolumeSpecName: "kube-api-access-ndmz9") pod "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" (UID: "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00"). InnerVolumeSpecName "kube-api-access-ndmz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.558202 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.586264 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-config-data" (OuterVolumeSpecName: "config-data") pod "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" (UID: "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.610157 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" (UID: "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.641182 4872 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.641222 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndmz9\" (UniqueName: \"kubernetes.io/projected/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-kube-api-access-ndmz9\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.641233 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.645896 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-scripts" (OuterVolumeSpecName: "scripts") pod "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" (UID: "6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.662519 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2accf2e-9270-4b7b-ac0e-7062b53c7cda","Type":"ContainerStarted","Data":"05e6488026929d8ac2e75c24326a2a0675050a69f08ecfc11affa945d6636a72"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.662678 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="sg-core" containerID="cri-o://0577a59e8d68959f7192529bba8f9c5a74fb63bc21b9264f51791cbddd61af8a" gracePeriod=30 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.662768 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.663060 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="proxy-httpd" containerID="cri-o://05e6488026929d8ac2e75c24326a2a0675050a69f08ecfc11affa945d6636a72" gracePeriod=30 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.679061 4872 generic.go:334] "Generic (PLEG): container finished" podID="9d670aec-b637-4fe6-b046-794d9628b49b" containerID="2a723d3a6a85c2b1ba0422b2b817e2490cb46df9fc9d206d89e541f7dabc42ba" exitCode=0 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.679151 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m54jj" event={"ID":"9d670aec-b637-4fe6-b046-794d9628b49b","Type":"ContainerDied","Data":"2a723d3a6a85c2b1ba0422b2b817e2490cb46df9fc9d206d89e541f7dabc42ba"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.711182 4872 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-db-sync-95dn6" event={"ID":"9ca50ee1-d592-41c3-869f-480e7d3d02f8","Type":"ContainerDied","Data":"e28a74e67c9efcb5fe027ec1062febae51d6015d3a96e19dcc2e250bb3e28067"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.711246 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e28a74e67c9efcb5fe027ec1062febae51d6015d3a96e19dcc2e250bb3e28067" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.711324 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-95dn6" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.750910 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzcqv\" (UniqueName: \"kubernetes.io/projected/1e832ac8-6556-46fd-88a6-b2ebc386cc14-kube-api-access-bzcqv\") pod \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.751240 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-config-data\") pod \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.751359 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e832ac8-6556-46fd-88a6-b2ebc386cc14-logs\") pod \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.751410 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-scripts\") pod \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.751434 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e832ac8-6556-46fd-88a6-b2ebc386cc14-horizon-secret-key\") pod \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\" (UID: \"1e832ac8-6556-46fd-88a6-b2ebc386cc14\") " Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.762844 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.770249 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e832ac8-6556-46fd-88a6-b2ebc386cc14-logs" (OuterVolumeSpecName: "logs") pod "1e832ac8-6556-46fd-88a6-b2ebc386cc14" (UID: "1e832ac8-6556-46fd-88a6-b2ebc386cc14"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.784440 4872 generic.go:334] "Generic (PLEG): container finished" podID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerID="2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8" exitCode=137 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.784472 4872 generic.go:334] "Generic (PLEG): container finished" podID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerID="1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a" exitCode=137 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.784963 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5c58cfc-sgggw" event={"ID":"1e832ac8-6556-46fd-88a6-b2ebc386cc14","Type":"ContainerDied","Data":"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.785009 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5c58cfc-sgggw" event={"ID":"1e832ac8-6556-46fd-88a6-b2ebc386cc14","Type":"ContainerDied","Data":"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.785021 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f5c58cfc-sgggw" event={"ID":"1e832ac8-6556-46fd-88a6-b2ebc386cc14","Type":"ContainerDied","Data":"cf291d867b89bd9d88bfdf226a7812a96324104feaab0239c9e9292e534558a5"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.785253 4872 scope.go:117] "RemoveContainer" containerID="2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.797674 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e832ac8-6556-46fd-88a6-b2ebc386cc14-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1e832ac8-6556-46fd-88a6-b2ebc386cc14" (UID: "1e832ac8-6556-46fd-88a6-b2ebc386cc14"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.818248 4872 generic.go:334] "Generic (PLEG): container finished" podID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerID="2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18" exitCode=137 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.818299 4872 generic.go:334] "Generic (PLEG): container finished" podID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerID="2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5" exitCode=137 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.818474 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf954ffc-8mhd9" event={"ID":"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00","Type":"ContainerDied","Data":"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.818495 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf954ffc-8mhd9" event={"ID":"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00","Type":"ContainerDied","Data":"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.818505 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf954ffc-8mhd9" event={"ID":"6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00","Type":"ContainerDied","Data":"528cdcba443b5dd612f21d1edc6d3023e16e3aa76fbf5750e7dd648e0aeef80e"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.819603 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf954ffc-8mhd9" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.821962 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69f5c58cfc-sgggw" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.862426 4872 generic.go:334] "Generic (PLEG): container finished" podID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerID="c4064763511a1e35c5acf78b13964a62a35f00d469db86649c646c0c9ad31477" exitCode=137 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.862462 4872 generic.go:334] "Generic (PLEG): container finished" podID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerID="f3678e68db7c1dbcfdf96c5b4c7d8d4da1a11538f6f43d1bc7a37114cee8dc40" exitCode=137 Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.862485 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdcd6ddbf-zq5br" event={"ID":"585fab82-3ee7-4833-9e6a-63d58e40867b","Type":"ContainerDied","Data":"c4064763511a1e35c5acf78b13964a62a35f00d469db86649c646c0c9ad31477"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.862510 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdcd6ddbf-zq5br" event={"ID":"585fab82-3ee7-4833-9e6a-63d58e40867b","Type":"ContainerDied","Data":"f3678e68db7c1dbcfdf96c5b4c7d8d4da1a11538f6f43d1bc7a37114cee8dc40"} Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.906864 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e832ac8-6556-46fd-88a6-b2ebc386cc14-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.919295 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-config-data" (OuterVolumeSpecName: "config-data") pod "1e832ac8-6556-46fd-88a6-b2ebc386cc14" (UID: "1e832ac8-6556-46fd-88a6-b2ebc386cc14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.929037 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-scripts" (OuterVolumeSpecName: "scripts") pod "1e832ac8-6556-46fd-88a6-b2ebc386cc14" (UID: "1e832ac8-6556-46fd-88a6-b2ebc386cc14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.952792 4872 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1e832ac8-6556-46fd-88a6-b2ebc386cc14-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:18 crc kubenswrapper[4872]: I0203 06:19:18.958933 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e832ac8-6556-46fd-88a6-b2ebc386cc14-kube-api-access-bzcqv" (OuterVolumeSpecName: "kube-api-access-bzcqv") pod "1e832ac8-6556-46fd-88a6-b2ebc386cc14" (UID: "1e832ac8-6556-46fd-88a6-b2ebc386cc14"). InnerVolumeSpecName "kube-api-access-bzcqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.058864 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.058899 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzcqv\" (UniqueName: \"kubernetes.io/projected/1e832ac8-6556-46fd-88a6-b2ebc386cc14-kube-api-access-bzcqv\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.058909 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e832ac8-6556-46fd-88a6-b2ebc386cc14-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.094252 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f7ccbfc56-8bmzq"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.102854 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-266wz"] Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.110448 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca50ee1-d592-41c3-869f-480e7d3d02f8" containerName="neutron-db-sync" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.110544 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca50ee1-d592-41c3-869f-480e7d3d02f8" containerName="neutron-db-sync" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.110658 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.110744 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.110830 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.110905 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.110989 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.111063 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.111148 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.111230 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.111573 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.111707 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 
06:19:19.111828 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.111931 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca50ee1-d592-41c3-869f-480e7d3d02f8" containerName="neutron-db-sync" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.112020 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.114131 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.130869 4872 scope.go:117] "RemoveContainer" containerID="1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.148773 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-266wz"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.166269 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf954ffc-8mhd9"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.190944 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bf954ffc-8mhd9"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.208342 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-dd8d7b8db-mtx4r"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.209631 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.215672 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.215917 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.216059 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vxf4x" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.234859 4872 scope.go:117] "RemoveContainer" containerID="2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.236755 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c77557787-rb2tb"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.238126 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.239944 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.240167 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8\": container with ID starting with 2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8 not found: ID does not exist" containerID="2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.240202 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8"} err="failed to get container status \"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8\": rpc error: code = NotFound desc = could not find container \"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8\": container with ID starting with 2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8 not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.240231 4872 scope.go:117] "RemoveContainer" containerID="1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.243426 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a\": container with ID starting with 1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a not found: ID does not exist" containerID="1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.243461 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a"} err="failed to get container status \"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a\": rpc error: code = NotFound desc = could not find container \"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a\": container with ID starting with 1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.243486 4872 scope.go:117] "RemoveContainer" containerID="2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.248226 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8"} err="failed to get container status \"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8\": rpc error: code = NotFound desc = could not find container \"2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8\": container with ID starting with 2e128ea9141aed26521eaa43e5555dab3bef96f55ea966837a766bdf6e6e1cf8 not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.248263 4872 scope.go:117] "RemoveContainer" containerID="1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.249136 
4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a"} err="failed to get container status \"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a\": rpc error: code = NotFound desc = could not find container \"1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a\": container with ID starting with 1929e94070309ee501c50a724e67c6f052db6af3b86410760de64dee8f4bf09a not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.249181 4872 scope.go:117] "RemoveContainer" containerID="2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.264383 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-config\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.264421 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-svc\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.264493 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.264521 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clzl2\" (UniqueName: \"kubernetes.io/projected/a846e1af-6b9c-4ef1-83dc-7fde36f52695-kube-api-access-clzl2\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.264588 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.265091 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.274146 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-dd8d7b8db-mtx4r"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.304020 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c77557787-rb2tb"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 
06:19:19.372728 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-config-data-custom\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.372804 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.372827 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-logs\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.372865 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clzl2\" (UniqueName: \"kubernetes.io/projected/a846e1af-6b9c-4ef1-83dc-7fde36f52695-kube-api-access-clzl2\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.372903 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9d16c1-a901-456f-a48d-a56879b49c8d-logs\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.372960 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjj9\" (UniqueName: \"kubernetes.io/projected/db9d16c1-a901-456f-a48d-a56879b49c8d-kube-api-access-5qjj9\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.372980 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt8dk\" (UniqueName: \"kubernetes.io/projected/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-kube-api-access-nt8dk\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373014 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-combined-ca-bundle\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373040 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373063 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373115 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-config-data\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373162 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-config\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373185 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-config-data-custom\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373203 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-svc\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373223 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-combined-ca-bundle\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.373254 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-config-data\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.374964 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.383208 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.388594 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-svc\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.389120 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.389483 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-config\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.401757 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-266wz"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.436308 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cmmdw"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.438188 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.439789 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-clzl2], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6b7b667979-266wz" podUID="a846e1af-6b9c-4ef1-83dc-7fde36f52695" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.446471 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.460841 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clzl2\" (UniqueName: \"kubernetes.io/projected/a846e1af-6b9c-4ef1-83dc-7fde36f52695-kube-api-access-clzl2\") pod \"dnsmasq-dns-6b7b667979-266wz\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.476037 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-config-data\") pod \"585fab82-3ee7-4833-9e6a-63d58e40867b\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.478609 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cmmdw"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479518 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585fab82-3ee7-4833-9e6a-63d58e40867b-logs\") pod \"585fab82-3ee7-4833-9e6a-63d58e40867b\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479548 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-scripts\") pod \"585fab82-3ee7-4833-9e6a-63d58e40867b\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479565 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xrg5\" (UniqueName: \"kubernetes.io/projected/585fab82-3ee7-4833-9e6a-63d58e40867b-kube-api-access-7xrg5\") pod \"585fab82-3ee7-4833-9e6a-63d58e40867b\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479606 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/585fab82-3ee7-4833-9e6a-63d58e40867b-horizon-secret-key\") pod \"585fab82-3ee7-4833-9e6a-63d58e40867b\" (UID: \"585fab82-3ee7-4833-9e6a-63d58e40867b\") " Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479861 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-config\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479887 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479939 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-config-data\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " 
pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479973 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt9km\" (UniqueName: \"kubernetes.io/projected/57418ad5-e380-4bcf-98a2-622035e88ec4-kube-api-access-vt9km\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.479990 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.480800 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-config-data-custom\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.480857 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-combined-ca-bundle\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.480874 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-config-data\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.480906 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.480974 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-config-data-custom\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.481056 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-logs\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.481185 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9d16c1-a901-456f-a48d-a56879b49c8d-logs\") pod 
\"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.481223 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjj9\" (UniqueName: \"kubernetes.io/projected/db9d16c1-a901-456f-a48d-a56879b49c8d-kube-api-access-5qjj9\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.481260 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt8dk\" (UniqueName: \"kubernetes.io/projected/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-kube-api-access-nt8dk\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.481287 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-combined-ca-bundle\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.481402 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.480676 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585fab82-3ee7-4833-9e6a-63d58e40867b-logs" (OuterVolumeSpecName: "logs") pod "585fab82-3ee7-4833-9e6a-63d58e40867b" (UID: "585fab82-3ee7-4833-9e6a-63d58e40867b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.483434 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-logs\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.484212 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db9d16c1-a901-456f-a48d-a56879b49c8d-logs\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.534385 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-config-data-custom\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.535052 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585fab82-3ee7-4833-9e6a-63d58e40867b-kube-api-access-7xrg5" (OuterVolumeSpecName: "kube-api-access-7xrg5") pod "585fab82-3ee7-4833-9e6a-63d58e40867b" (UID: "585fab82-3ee7-4833-9e6a-63d58e40867b"). InnerVolumeSpecName "kube-api-access-7xrg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.536344 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-config-data\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.536952 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-config-data-custom\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.539355 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/585fab82-3ee7-4833-9e6a-63d58e40867b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "585fab82-3ee7-4833-9e6a-63d58e40867b" (UID: "585fab82-3ee7-4833-9e6a-63d58e40867b"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.543306 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-combined-ca-bundle\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.546830 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69f5c58cfc-sgggw"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.569686 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-config-data\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.584929 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjj9\" (UniqueName: \"kubernetes.io/projected/db9d16c1-a901-456f-a48d-a56879b49c8d-kube-api-access-5qjj9\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.585644 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.585827 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.585870 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-config\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.585890 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.585954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt9km\" (UniqueName: \"kubernetes.io/projected/57418ad5-e380-4bcf-98a2-622035e88ec4-kube-api-access-vt9km\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.585976 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.586050 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/585fab82-3ee7-4833-9e6a-63d58e40867b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.586065 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xrg5\" (UniqueName: \"kubernetes.io/projected/585fab82-3ee7-4833-9e6a-63d58e40867b-kube-api-access-7xrg5\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.586094 4872 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/585fab82-3ee7-4833-9e6a-63d58e40867b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.587122 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.592926 4872 scope.go:117] "RemoveContainer" containerID="2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.609363 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db9d16c1-a901-456f-a48d-a56879b49c8d-combined-ca-bundle\") pod \"barbican-keystone-listener-dd8d7b8db-mtx4r\" (UID: \"db9d16c1-a901-456f-a48d-a56879b49c8d\") " pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.611373 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69f5c58cfc-sgggw"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.613136 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.618458 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.618463 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.619728 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-config-data" (OuterVolumeSpecName: "config-data") pod 
"585fab82-3ee7-4833-9e6a-63d58e40867b" (UID: "585fab82-3ee7-4833-9e6a-63d58e40867b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.635406 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-config\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.663606 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt8dk\" (UniqueName: \"kubernetes.io/projected/8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5-kube-api-access-nt8dk\") pod \"barbican-worker-6c77557787-rb2tb\" (UID: \"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5\") " pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.676671 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt9km\" (UniqueName: \"kubernetes.io/projected/57418ad5-e380-4bcf-98a2-622035e88ec4-kube-api-access-vt9km\") pod \"dnsmasq-dns-848cf88cfc-cmmdw\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.692063 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.723806 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c77557787-rb2tb" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.743397 4872 scope.go:117] "RemoveContainer" containerID="2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.743654 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-scripts" (OuterVolumeSpecName: "scripts") pod "585fab82-3ee7-4833-9e6a-63d58e40867b" (UID: "585fab82-3ee7-4833-9e6a-63d58e40867b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.758136 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18\": container with ID starting with 2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18 not found: ID does not exist" containerID="2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.758180 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18"} err="failed to get container status \"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18\": rpc error: code = NotFound desc = could not find container \"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18\": container with ID starting with 2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18 not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.758204 4872 scope.go:117] "RemoveContainer" containerID="2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.760722 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.761075 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5\": container with ID starting with 2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5 not found: ID does not exist" containerID="2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.761098 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5"} err="failed to get container status \"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5\": rpc error: code = NotFound desc = could not find container \"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5\": container with ID starting with 2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5 not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.761113 4872 scope.go:117] "RemoveContainer" containerID="2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.780592 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.781035 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18"} err="failed to get container status \"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18\": rpc error: code = NotFound desc = could not find container \"2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18\": container with ID starting with 2c524c04337efa64de9f0ef072ca460743ba1f9eeda636608c6e40e3b3d4ff18 not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.781063 4872 scope.go:117] "RemoveContainer" containerID="2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.784312 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b5585877d-9z5lp"] Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.784619 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.784633 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: E0203 06:19:19.784652 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.784658 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.784808 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.784829 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" containerName="horizon-log" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.785573 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.796541 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/585fab82-3ee7-4833-9e6a-63d58e40867b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.810910 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5"} err="failed to get container status \"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5\": rpc error: code = NotFound desc = could not find container \"2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5\": container with ID starting with 2c9305ee2c251566e46c721f4326af96782dbaf4be07b37e1b0b7303f569bce5 not found: ID does not exist" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.811204 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.866643 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fb67d557b-lcx8h"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.868487 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.877017 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xcn7f" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.888353 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.888493 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.889413 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.907765 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b5585877d-9z5lp"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908651 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-combined-ca-bundle\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908700 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-ovndb-tls-certs\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908724 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908783 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7sp\" (UniqueName: \"kubernetes.io/projected/683c6fc3-d9ec-4e50-9c36-22a929897924-kube-api-access-jt7sp\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908835 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx6n5\" (UniqueName: \"kubernetes.io/projected/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-kube-api-access-xx6n5\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908852 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-httpd-config\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908882 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-config\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908930 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-combined-ca-bundle\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908951 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/683c6fc3-d9ec-4e50-9c36-22a929897924-logs\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.908983 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data-custom\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.924779 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fb67d557b-lcx8h"] Feb 03 06:19:19 crc kubenswrapper[4872]: I0203 06:19:19.941921 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7ccbfc56-8bmzq" event={"ID":"90a09160-e06d-497b-bf29-781a4009c899","Type":"ContainerStarted","Data":"8d52745424c8d80036c8e99adfe6382bdcf36add33fdecdd1f6d8206db6c762c"} Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.005203 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdcd6ddbf-zq5br" event={"ID":"585fab82-3ee7-4833-9e6a-63d58e40867b","Type":"ContainerDied","Data":"7ad08ce3954e6f77e6482b13a5c57f3d0aa2d25132cac9fcb230094ede4a6dd0"} Feb 03 06:19:20 crc kubenswrapper[4872]: 
I0203 06:19:20.005252 4872 scope.go:117] "RemoveContainer" containerID="c4064763511a1e35c5acf78b13964a62a35f00d469db86649c646c0c9ad31477" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.005369 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bdcd6ddbf-zq5br" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014154 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-combined-ca-bundle\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014189 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/683c6fc3-d9ec-4e50-9c36-22a929897924-logs\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014211 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data-custom\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014240 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-combined-ca-bundle\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014260 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-ovndb-tls-certs\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014282 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014341 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7sp\" (UniqueName: \"kubernetes.io/projected/683c6fc3-d9ec-4e50-9c36-22a929897924-kube-api-access-jt7sp\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014366 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx6n5\" (UniqueName: \"kubernetes.io/projected/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-kube-api-access-xx6n5\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014382 4872 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-httpd-config\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.014411 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-config\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.030023 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-ovndb-tls-certs\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.031565 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/683c6fc3-d9ec-4e50-9c36-22a929897924-logs\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.043560 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-config\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.045339 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.052283 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-combined-ca-bundle\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.052455 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data-custom\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.053091 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-combined-ca-bundle\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.053557 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-httpd-config\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " 
pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.054874 4872 generic.go:334] "Generic (PLEG): container finished" podID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerID="05e6488026929d8ac2e75c24326a2a0675050a69f08ecfc11affa945d6636a72" exitCode=0 Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.054903 4872 generic.go:334] "Generic (PLEG): container finished" podID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerID="0577a59e8d68959f7192529bba8f9c5a74fb63bc21b9264f51791cbddd61af8a" exitCode=2 Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.054973 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2accf2e-9270-4b7b-ac0e-7062b53c7cda","Type":"ContainerDied","Data":"05e6488026929d8ac2e75c24326a2a0675050a69f08ecfc11affa945d6636a72"} Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.054996 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2accf2e-9270-4b7b-ac0e-7062b53c7cda","Type":"ContainerDied","Data":"0577a59e8d68959f7192529bba8f9c5a74fb63bc21b9264f51791cbddd61af8a"} Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.090156 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7sp\" (UniqueName: \"kubernetes.io/projected/683c6fc3-d9ec-4e50-9c36-22a929897924-kube-api-access-jt7sp\") pod \"barbican-api-b5585877d-9z5lp\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.090375 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.106677 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx6n5\" (UniqueName: \"kubernetes.io/projected/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-kube-api-access-xx6n5\") pod \"neutron-6fb67d557b-lcx8h\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.169896 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.217310 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.258840 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.260129 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e832ac8-6556-46fd-88a6-b2ebc386cc14" path="/var/lib/kubelet/pods/1e832ac8-6556-46fd-88a6-b2ebc386cc14/volumes" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.303780 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00" path="/var/lib/kubelet/pods/6173b3f4-ad7a-47fa-88ff-57ba1fc4fb00/volumes" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.306227 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bdcd6ddbf-zq5br"] Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.335947 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-nb\") pod \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.340185 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-sb\") pod \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.340320 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clzl2\" (UniqueName: \"kubernetes.io/projected/a846e1af-6b9c-4ef1-83dc-7fde36f52695-kube-api-access-clzl2\") pod \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.340408 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-config\") pod \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.340553 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-swift-storage-0\") pod \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.340614 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-svc\") pod \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\" (UID: \"a846e1af-6b9c-4ef1-83dc-7fde36f52695\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.348440 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a846e1af-6b9c-4ef1-83dc-7fde36f52695" (UID: "a846e1af-6b9c-4ef1-83dc-7fde36f52695"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.351837 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a846e1af-6b9c-4ef1-83dc-7fde36f52695" (UID: "a846e1af-6b9c-4ef1-83dc-7fde36f52695"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.351866 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a846e1af-6b9c-4ef1-83dc-7fde36f52695" (UID: "a846e1af-6b9c-4ef1-83dc-7fde36f52695"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.352387 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-config" (OuterVolumeSpecName: "config") pod "a846e1af-6b9c-4ef1-83dc-7fde36f52695" (UID: "a846e1af-6b9c-4ef1-83dc-7fde36f52695"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.353978 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a846e1af-6b9c-4ef1-83dc-7fde36f52695" (UID: "a846e1af-6b9c-4ef1-83dc-7fde36f52695"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.395470 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a846e1af-6b9c-4ef1-83dc-7fde36f52695-kube-api-access-clzl2" (OuterVolumeSpecName: "kube-api-access-clzl2") pod "a846e1af-6b9c-4ef1-83dc-7fde36f52695" (UID: "a846e1af-6b9c-4ef1-83dc-7fde36f52695"). InnerVolumeSpecName "kube-api-access-clzl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.396745 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bdcd6ddbf-zq5br"] Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.444061 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clzl2\" (UniqueName: \"kubernetes.io/projected/a846e1af-6b9c-4ef1-83dc-7fde36f52695-kube-api-access-clzl2\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.444089 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.444100 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.444108 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.444117 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.444139 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a846e1af-6b9c-4ef1-83dc-7fde36f52695-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.604089 4872 scope.go:117] "RemoveContainer" containerID="f3678e68db7c1dbcfdf96c5b4c7d8d4da1a11538f6f43d1bc7a37114cee8dc40" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.735840 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.735912 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.736646 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"534ee65dd0168b081759c8c62509c1a653cf9038605da25240365de5b1fdcb4c"} pod="openstack/horizon-6b48d58c48-rvvcb" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.736930 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" containerID="cri-o://534ee65dd0168b081759c8c62509c1a653cf9038605da25240365de5b1fdcb4c" gracePeriod=30 Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.775744 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.859282 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-combined-ca-bundle\") pod \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.859362 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwwtl\" (UniqueName: \"kubernetes.io/projected/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-kube-api-access-dwwtl\") pod \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.859388 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-log-httpd\") pod \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.859485 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-run-httpd\") pod \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.859547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-scripts\") pod \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.859607 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-sg-core-conf-yaml\") pod \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.859633 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-config-data\") pod \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\" (UID: \"f2accf2e-9270-4b7b-ac0e-7062b53c7cda\") " Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.862405 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2accf2e-9270-4b7b-ac0e-7062b53c7cda" (UID: "f2accf2e-9270-4b7b-ac0e-7062b53c7cda"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.879868 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2accf2e-9270-4b7b-ac0e-7062b53c7cda" (UID: "f2accf2e-9270-4b7b-ac0e-7062b53c7cda"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.881825 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-kube-api-access-dwwtl" (OuterVolumeSpecName: "kube-api-access-dwwtl") pod "f2accf2e-9270-4b7b-ac0e-7062b53c7cda" (UID: "f2accf2e-9270-4b7b-ac0e-7062b53c7cda"). InnerVolumeSpecName "kube-api-access-dwwtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.889498 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57dc94599b-bvf7j" podUID="f475ab66-31e6-46da-ad2e-8e8279e33b68" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.889700 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.890357 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"def8661c05bb4e9553cd79780f48ef13e8421f47cd09acd384a5bfeb8d63e1a4"} pod="openstack/horizon-57dc94599b-bvf7j" containerMessage="Container horizon failed startup probe, will be restarted" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.890443 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-57dc94599b-bvf7j" podUID="f475ab66-31e6-46da-ad2e-8e8279e33b68" containerName="horizon" containerID="cri-o://def8661c05bb4e9553cd79780f48ef13e8421f47cd09acd384a5bfeb8d63e1a4" gracePeriod=30 Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.899817 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-scripts" (OuterVolumeSpecName: "scripts") pod "f2accf2e-9270-4b7b-ac0e-7062b53c7cda" (UID: "f2accf2e-9270-4b7b-ac0e-7062b53c7cda"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.905876 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c77557787-rb2tb"] Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.962069 4872 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.962175 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.962185 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwwtl\" (UniqueName: \"kubernetes.io/projected/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-kube-api-access-dwwtl\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.962263 4872 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.971230 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2accf2e-9270-4b7b-ac0e-7062b53c7cda" (UID: "f2accf2e-9270-4b7b-ac0e-7062b53c7cda"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:20 crc kubenswrapper[4872]: I0203 06:19:20.990547 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-config-data" (OuterVolumeSpecName: "config-data") pod "f2accf2e-9270-4b7b-ac0e-7062b53c7cda" (UID: "f2accf2e-9270-4b7b-ac0e-7062b53c7cda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.018217 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2accf2e-9270-4b7b-ac0e-7062b53c7cda" (UID: "f2accf2e-9270-4b7b-ac0e-7062b53c7cda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.079921 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.082314 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.082758 4872 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2accf2e-9270-4b7b-ac0e-7062b53c7cda-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.104418 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cmmdw"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.159372 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" event={"ID":"57418ad5-e380-4bcf-98a2-622035e88ec4","Type":"ContainerStarted","Data":"21ff971579ffbacfe9629b7e4b018a0fd9ab572047d17a1ea8770244123c0ab9"} Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.162218 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7ccbfc56-8bmzq" event={"ID":"90a09160-e06d-497b-bf29-781a4009c899","Type":"ContainerStarted","Data":"b953defe503542051c56d4d6ba94cbf87a9a33cb30054dd6a39fca78664a2e72"} Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.163484 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c77557787-rb2tb" event={"ID":"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5","Type":"ContainerStarted","Data":"3042df1ce677dc9a917f815886d8160d93dabfdce0194720ffe0d4f8296c9b55"} Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.182451 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-266wz" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.182468 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2accf2e-9270-4b7b-ac0e-7062b53c7cda","Type":"ContainerDied","Data":"5dee148751e28f6a51c0ceae9c3977434248817bd72762e464bd2f9073f521eb"} Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.182499 4872 scope.go:117] "RemoveContainer" containerID="05e6488026929d8ac2e75c24326a2a0675050a69f08ecfc11affa945d6636a72" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.182453 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.184368 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-m54jj" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.208269 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-dd8d7b8db-mtx4r"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.303993 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-db-sync-config-data\") pod \"9d670aec-b637-4fe6-b046-794d9628b49b\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.304050 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d670aec-b637-4fe6-b046-794d9628b49b-etc-machine-id\") pod \"9d670aec-b637-4fe6-b046-794d9628b49b\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.304081 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-config-data\") pod \"9d670aec-b637-4fe6-b046-794d9628b49b\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.304100 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc452\" (UniqueName: \"kubernetes.io/projected/9d670aec-b637-4fe6-b046-794d9628b49b-kube-api-access-pc452\") pod \"9d670aec-b637-4fe6-b046-794d9628b49b\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.304213 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-scripts\") pod \"9d670aec-b637-4fe6-b046-794d9628b49b\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.304263 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-combined-ca-bundle\") pod \"9d670aec-b637-4fe6-b046-794d9628b49b\" (UID: \"9d670aec-b637-4fe6-b046-794d9628b49b\") " Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.327057 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d670aec-b637-4fe6-b046-794d9628b49b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9d670aec-b637-4fe6-b046-794d9628b49b" (UID: "9d670aec-b637-4fe6-b046-794d9628b49b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.360185 4872 scope.go:117] "RemoveContainer" containerID="0577a59e8d68959f7192529bba8f9c5a74fb63bc21b9264f51791cbddd61af8a" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.361763 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9d670aec-b637-4fe6-b046-794d9628b49b" (UID: "9d670aec-b637-4fe6-b046-794d9628b49b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.363125 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-scripts" (OuterVolumeSpecName: "scripts") pod "9d670aec-b637-4fe6-b046-794d9628b49b" (UID: "9d670aec-b637-4fe6-b046-794d9628b49b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.369333 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d670aec-b637-4fe6-b046-794d9628b49b-kube-api-access-pc452" (OuterVolumeSpecName: "kube-api-access-pc452") pod "9d670aec-b637-4fe6-b046-794d9628b49b" (UID: "9d670aec-b637-4fe6-b046-794d9628b49b"). InnerVolumeSpecName "kube-api-access-pc452". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.374946 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-266wz"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.398055 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-266wz"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.419162 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.419190 4872 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.419198 4872 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d670aec-b637-4fe6-b046-794d9628b49b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.419209 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc452\" (UniqueName: \"kubernetes.io/projected/9d670aec-b637-4fe6-b046-794d9628b49b-kube-api-access-pc452\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.471001 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d670aec-b637-4fe6-b046-794d9628b49b" (UID: "9d670aec-b637-4fe6-b046-794d9628b49b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.488757 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.509770 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.520735 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.530420 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-config-data" (OuterVolumeSpecName: "config-data") pod "9d670aec-b637-4fe6-b046-794d9628b49b" (UID: "9d670aec-b637-4fe6-b046-794d9628b49b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.542043 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:21 crc kubenswrapper[4872]: E0203 06:19:21.542370 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="sg-core" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.542380 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="sg-core" Feb 03 06:19:21 crc kubenswrapper[4872]: E0203 06:19:21.542394 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="proxy-httpd" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.542402 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="proxy-httpd" Feb 03 06:19:21 crc kubenswrapper[4872]: E0203 06:19:21.542409 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d670aec-b637-4fe6-b046-794d9628b49b" containerName="cinder-db-sync" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.542415 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d670aec-b637-4fe6-b046-794d9628b49b" containerName="cinder-db-sync" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.542664 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d670aec-b637-4fe6-b046-794d9628b49b" containerName="cinder-db-sync" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.542693 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="sg-core" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.542704 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" containerName="proxy-httpd" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.548749 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.551713 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.551950 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.573048 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.622462 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-run-httpd\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.622646 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-log-httpd\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.622855 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.622985 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk57p\" (UniqueName: \"kubernetes.io/projected/d38f107b-557f-4b56-89b4-5ecf9f235133-kube-api-access-vk57p\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.623088 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.623198 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-config-data\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.623329 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-scripts\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.623541 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d670aec-b637-4fe6-b046-794d9628b49b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.633588 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-6fb67d557b-lcx8h"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.712764 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b5585877d-9z5lp"] Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.734031 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-run-httpd\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.734086 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-log-httpd\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.734117 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.734161 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk57p\" (UniqueName: \"kubernetes.io/projected/d38f107b-557f-4b56-89b4-5ecf9f235133-kube-api-access-vk57p\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.734184 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.734211 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-config-data\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.734268 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-scripts\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.738050 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-run-httpd\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.738913 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-log-httpd\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.761756 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-config-data\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.763110 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.765028 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.765896 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-scripts\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.771585 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk57p\" (UniqueName: \"kubernetes.io/projected/d38f107b-557f-4b56-89b4-5ecf9f235133-kube-api-access-vk57p\") pod \"ceilometer-0\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " pod="openstack/ceilometer-0" Feb 03 06:19:21 crc kubenswrapper[4872]: I0203 06:19:21.924077 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.146717 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585fab82-3ee7-4833-9e6a-63d58e40867b" path="/var/lib/kubelet/pods/585fab82-3ee7-4833-9e6a-63d58e40867b/volumes" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.148036 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a846e1af-6b9c-4ef1-83dc-7fde36f52695" path="/var/lib/kubelet/pods/a846e1af-6b9c-4ef1-83dc-7fde36f52695/volumes" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.148477 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2accf2e-9270-4b7b-ac0e-7062b53c7cda" path="/var/lib/kubelet/pods/f2accf2e-9270-4b7b-ac0e-7062b53c7cda/volumes" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.233079 4872 generic.go:334] "Generic (PLEG): container finished" podID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerID="b8889dc62e8daa76047c99214aa0b28c4c3f66f9c007b364a56e7f027650ebc0" exitCode=0 Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.233181 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" event={"ID":"57418ad5-e380-4bcf-98a2-622035e88ec4","Type":"ContainerDied","Data":"b8889dc62e8daa76047c99214aa0b28c4c3f66f9c007b364a56e7f027650ebc0"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.248488 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" event={"ID":"db9d16c1-a901-456f-a48d-a56879b49c8d","Type":"ContainerStarted","Data":"cd5c2888fa52c4ae967a8235f9d123c3b97bfecc28ad852a9a329e8feaf8134a"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.282738 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m54jj" 
event={"ID":"9d670aec-b637-4fe6-b046-794d9628b49b","Type":"ContainerDied","Data":"fb31fd8f3767e9facceaf103b2500726ac80378d7ad850357fe51a0a22f0af9a"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.282776 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb31fd8f3767e9facceaf103b2500726ac80378d7ad850357fe51a0a22f0af9a" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.282859 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m54jj" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.294213 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b5585877d-9z5lp" event={"ID":"683c6fc3-d9ec-4e50-9c36-22a929897924","Type":"ContainerStarted","Data":"87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.294256 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b5585877d-9z5lp" event={"ID":"683c6fc3-d9ec-4e50-9c36-22a929897924","Type":"ContainerStarted","Data":"d8b17f8b75c269b32179beb6de7d299b46f8b82b0a12ac6ccb64980158f57c08"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.295617 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fb67d557b-lcx8h" event={"ID":"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9","Type":"ContainerStarted","Data":"3c062693bb24ee56336c9c2463c9530f815effc468d340d54d6d625073d6ecda"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.295640 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fb67d557b-lcx8h" event={"ID":"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9","Type":"ContainerStarted","Data":"982f5608d4262266ff8e40ca7f2115b8048ce8a8f1c185ad1718c799aff0e756"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.317498 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f7ccbfc56-8bmzq" event={"ID":"90a09160-e06d-497b-bf29-781a4009c899","Type":"ContainerStarted","Data":"0f8c69c62d45a39f85c5b2efda2147d247496a511ecaada7530990820f6df6e0"} Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.317665 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.381289 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f7ccbfc56-8bmzq" podStartSLOduration=5.381269097 podStartE2EDuration="5.381269097s" podCreationTimestamp="2026-02-03 06:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:22.334833636 +0000 UTC m=+1132.917525050" watchObservedRunningTime="2026-02-03 06:19:22.381269097 +0000 UTC m=+1132.963960511" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.447044 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.448471 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.482726 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.482920 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.483938 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fd9l2" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.483938 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.484113 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.484166 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.502983 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.510338 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.510437 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.667797 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-scripts\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.667915 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.667947 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrm4\" (UniqueName: \"kubernetes.io/projected/c195a99e-dc2e-4e20-bcf8-8b40a1497680-kube-api-access-gdrm4\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.668077 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.668103 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c195a99e-dc2e-4e20-bcf8-8b40a1497680-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.668118 4872 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.674775 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cmmdw"] Feb 03 06:19:22 crc kubenswrapper[4872]: W0203 06:19:22.742488 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38f107b_557f_4b56_89b4_5ecf9f235133.slice/crio-e94e3f35161c038082cd16179e1dbcda02f60da3ba5e8b9bbb30ff6a1d8048b5 WatchSource:0}: Error finding container e94e3f35161c038082cd16179e1dbcda02f60da3ba5e8b9bbb30ff6a1d8048b5: Status 404 returned error can't find the container with id e94e3f35161c038082cd16179e1dbcda02f60da3ba5e8b9bbb30ff6a1d8048b5 Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.756922 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.781844 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-scripts\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.782102 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.788230 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-scripts\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.790309 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.798652 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdrm4\" (UniqueName: \"kubernetes.io/projected/c195a99e-dc2e-4e20-bcf8-8b40a1497680-kube-api-access-gdrm4\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.799138 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.799191 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.799212 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c195a99e-dc2e-4e20-bcf8-8b40a1497680-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.799386 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c195a99e-dc2e-4e20-bcf8-8b40a1497680-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.806279 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.814746 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mgwfc"] Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.824919 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.826398 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.854957 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdrm4\" (UniqueName: \"kubernetes.io/projected/c195a99e-dc2e-4e20-bcf8-8b40a1497680-kube-api-access-gdrm4\") pod \"cinder-scheduler-0\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.869297 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mgwfc"] Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.907767 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.907820 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.907876 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dtgb\" (UniqueName: \"kubernetes.io/projected/b172b5dc-48c4-43ce-a949-7dec7d0e7567-kube-api-access-7dtgb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.907913 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.907946 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:22 crc kubenswrapper[4872]: I0203 06:19:22.907982 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-config\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.011459 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.011718 4872 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-config\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.011852 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.011922 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.012039 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dtgb\" (UniqueName: \"kubernetes.io/projected/b172b5dc-48c4-43ce-a949-7dec7d0e7567-kube-api-access-7dtgb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.012152 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.012960 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.013224 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.019882 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.023133 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.028362 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-config\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.058404 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dtgb\" (UniqueName: \"kubernetes.io/projected/b172b5dc-48c4-43ce-a949-7dec7d0e7567-kube-api-access-7dtgb\") pod \"dnsmasq-dns-6578955fd5-mgwfc\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.079216 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.130092 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.164989 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.166483 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.171322 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.180998 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.214912 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.220377 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-scripts\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.220420 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.220444 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.220460 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data-custom\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.220490 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d661c4f7-cd02-4399-afe1-8d073e11fe69-logs\") pod 
\"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.220532 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d661c4f7-cd02-4399-afe1-8d073e11fe69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.220561 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhvcg\" (UniqueName: \"kubernetes.io/projected/d661c4f7-cd02-4399-afe1-8d073e11fe69-kube-api-access-mhvcg\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.221324 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.324382 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.324726 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data-custom\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.324765 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d661c4f7-cd02-4399-afe1-8d073e11fe69-logs\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.324813 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d661c4f7-cd02-4399-afe1-8d073e11fe69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.324845 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhvcg\" (UniqueName: \"kubernetes.io/projected/d661c4f7-cd02-4399-afe1-8d073e11fe69-kube-api-access-mhvcg\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.326762 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-scripts\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.326804 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.327116 4872 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d661c4f7-cd02-4399-afe1-8d073e11fe69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.327409 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d661c4f7-cd02-4399-afe1-8d073e11fe69-logs\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.330857 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.336566 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-scripts\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.357560 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.357851 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data-custom\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.390654 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhvcg\" (UniqueName: \"kubernetes.io/projected/d661c4f7-cd02-4399-afe1-8d073e11fe69-kube-api-access-mhvcg\") pod \"cinder-api-0\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " pod="openstack/cinder-api-0" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.434188 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerStarted","Data":"e94e3f35161c038082cd16179e1dbcda02f60da3ba5e8b9bbb30ff6a1d8048b5"} Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.435615 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:23 crc kubenswrapper[4872]: I0203 06:19:23.524048 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.036589 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.100834 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mgwfc"] Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.349652 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.449242 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerStarted","Data":"0a6156556717d23db672daf90649eef056db7d825d186e86e07aab6f3f011879"} Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.455976 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c195a99e-dc2e-4e20-bcf8-8b40a1497680","Type":"ContainerStarted","Data":"7dfa180d16e0b3903c3c38d86d40fdbb5200f4ce8b6d672931e34bda2fefc091"} Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.457726 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" event={"ID":"b172b5dc-48c4-43ce-a949-7dec7d0e7567","Type":"ContainerStarted","Data":"def04805f95880b9a9e777f8edaa7c8d0dc226cd9d32b21989bc25beb171b7ee"} Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.497418 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b5585877d-9z5lp" event={"ID":"683c6fc3-d9ec-4e50-9c36-22a929897924","Type":"ContainerStarted","Data":"076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92"} Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.497498 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.497537 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.504299 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fb67d557b-lcx8h" event={"ID":"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9","Type":"ContainerStarted","Data":"e416fa1d39f57e8f200bd6f2fc35376c79265cec30a52203cec574146c5e1330"} Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.505047 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.506934 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" podUID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerName="dnsmasq-dns" containerID="cri-o://553834d94deb59922435268d6ee1990ac15f482ce2ddb6120cfd05aec6568be0" gracePeriod=10 Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.507140 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" event={"ID":"57418ad5-e380-4bcf-98a2-622035e88ec4","Type":"ContainerStarted","Data":"553834d94deb59922435268d6ee1990ac15f482ce2ddb6120cfd05aec6568be0"} Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.507168 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.523840 4872 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/barbican-api-b5585877d-9z5lp" podStartSLOduration=5.523819971 podStartE2EDuration="5.523819971s" podCreationTimestamp="2026-02-03 06:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:24.51674608 +0000 UTC m=+1135.099437494" watchObservedRunningTime="2026-02-03 06:19:24.523819971 +0000 UTC m=+1135.106511385" Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.537364 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fb67d557b-lcx8h" podStartSLOduration=5.537346717 podStartE2EDuration="5.537346717s" podCreationTimestamp="2026-02-03 06:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:24.535678716 +0000 UTC m=+1135.118370130" watchObservedRunningTime="2026-02-03 06:19:24.537346717 +0000 UTC m=+1135.120038131" Feb 03 06:19:24 crc kubenswrapper[4872]: I0203 06:19:24.568351 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" podStartSLOduration=5.568333164 podStartE2EDuration="5.568333164s" podCreationTimestamp="2026-02-03 06:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:24.558928577 +0000 UTC m=+1135.141619991" watchObservedRunningTime="2026-02-03 06:19:24.568333164 +0000 UTC m=+1135.151024578" Feb 03 06:19:25 crc kubenswrapper[4872]: I0203 06:19:25.559523 4872 generic.go:334] "Generic (PLEG): container finished" podID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerID="553834d94deb59922435268d6ee1990ac15f482ce2ddb6120cfd05aec6568be0" exitCode=0 Feb 03 06:19:25 crc kubenswrapper[4872]: I0203 06:19:25.559702 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" event={"ID":"57418ad5-e380-4bcf-98a2-622035e88ec4","Type":"ContainerDied","Data":"553834d94deb59922435268d6ee1990ac15f482ce2ddb6120cfd05aec6568be0"} Feb 03 06:19:25 crc kubenswrapper[4872]: I0203 06:19:25.572857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d661c4f7-cd02-4399-afe1-8d073e11fe69","Type":"ContainerStarted","Data":"ed0dbd051aab706c08d129090fefdfbdd74a6b6a2d036b77f5d6cd142ffb11a5"} Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.144245 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.200217 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-swift-storage-0\") pod \"57418ad5-e380-4bcf-98a2-622035e88ec4\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.200493 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-nb\") pod \"57418ad5-e380-4bcf-98a2-622035e88ec4\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.200576 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-config\") pod \"57418ad5-e380-4bcf-98a2-622035e88ec4\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.200609 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-sb\") pod \"57418ad5-e380-4bcf-98a2-622035e88ec4\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.200632 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-svc\") pod \"57418ad5-e380-4bcf-98a2-622035e88ec4\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.200711 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt9km\" (UniqueName: \"kubernetes.io/projected/57418ad5-e380-4bcf-98a2-622035e88ec4-kube-api-access-vt9km\") pod \"57418ad5-e380-4bcf-98a2-622035e88ec4\" (UID: \"57418ad5-e380-4bcf-98a2-622035e88ec4\") " Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.265159 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57418ad5-e380-4bcf-98a2-622035e88ec4-kube-api-access-vt9km" (OuterVolumeSpecName: "kube-api-access-vt9km") pod "57418ad5-e380-4bcf-98a2-622035e88ec4" (UID: "57418ad5-e380-4bcf-98a2-622035e88ec4"). InnerVolumeSpecName "kube-api-access-vt9km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.303270 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt9km\" (UniqueName: \"kubernetes.io/projected/57418ad5-e380-4bcf-98a2-622035e88ec4-kube-api-access-vt9km\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.305325 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-config" (OuterVolumeSpecName: "config") pod "57418ad5-e380-4bcf-98a2-622035e88ec4" (UID: "57418ad5-e380-4bcf-98a2-622035e88ec4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.347897 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57418ad5-e380-4bcf-98a2-622035e88ec4" (UID: "57418ad5-e380-4bcf-98a2-622035e88ec4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.348915 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57418ad5-e380-4bcf-98a2-622035e88ec4" (UID: "57418ad5-e380-4bcf-98a2-622035e88ec4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.353179 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57418ad5-e380-4bcf-98a2-622035e88ec4" (UID: "57418ad5-e380-4bcf-98a2-622035e88ec4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.354547 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57418ad5-e380-4bcf-98a2-622035e88ec4" (UID: "57418ad5-e380-4bcf-98a2-622035e88ec4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.404660 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.404719 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.404733 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.404743 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.404753 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57418ad5-e380-4bcf-98a2-622035e88ec4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.597312 4872 generic.go:334] "Generic (PLEG): container finished" podID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerID="5117d81e75302041fdae2a1efd1eae7732a9e81c62cafefb06676cd851ee9ee5" exitCode=0 Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.597394 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" event={"ID":"b172b5dc-48c4-43ce-a949-7dec7d0e7567","Type":"ContainerDied","Data":"5117d81e75302041fdae2a1efd1eae7732a9e81c62cafefb06676cd851ee9ee5"} Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.604608 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.604747 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-cmmdw" event={"ID":"57418ad5-e380-4bcf-98a2-622035e88ec4","Type":"ContainerDied","Data":"21ff971579ffbacfe9629b7e4b018a0fd9ab572047d17a1ea8770244123c0ab9"} Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.604813 4872 scope.go:117] "RemoveContainer" containerID="553834d94deb59922435268d6ee1990ac15f482ce2ddb6120cfd05aec6568be0" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.672813 4872 scope.go:117] "RemoveContainer" containerID="b8889dc62e8daa76047c99214aa0b28c4c3f66f9c007b364a56e7f027650ebc0" Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.693835 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cmmdw"] Feb 03 06:19:26 crc kubenswrapper[4872]: I0203 06:19:26.715643 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-cmmdw"] Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.549340 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.632391 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c77557787-rb2tb" event={"ID":"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5","Type":"ContainerStarted","Data":"1006321088beb7656d99c9d6528c731a75b5ff908839b4fe843596c175991486"} Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.638177 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerStarted","Data":"d8b986f1dc2139eae0f150ec499692542c00c7a32fc9258d14073ee0ab04f552"} Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.723559 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84b5664f65-lkwpl"] Feb 03 06:19:27 crc kubenswrapper[4872]: E0203 06:19:27.723958 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerName="init" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.723972 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerName="init" Feb 03 06:19:27 crc kubenswrapper[4872]: E0203 06:19:27.723984 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerName="dnsmasq-dns" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.723990 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerName="dnsmasq-dns" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.724167 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="57418ad5-e380-4bcf-98a2-622035e88ec4" containerName="dnsmasq-dns" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.725052 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.733501 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c195a99e-dc2e-4e20-bcf8-8b40a1497680","Type":"ContainerStarted","Data":"5fc06134d723ce29f8b489386a89af06e0f2a6e0cc8c87d038da2a6c1515e82a"} Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.789291 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.789601 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.792066 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84b5664f65-lkwpl"] Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.802380 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d661c4f7-cd02-4399-afe1-8d073e11fe69","Type":"ContainerStarted","Data":"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f"} Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.807458 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" event={"ID":"b172b5dc-48c4-43ce-a949-7dec7d0e7567","Type":"ContainerStarted","Data":"79b34c2e7fad425d1840ba85b1619afeb5c812d69861a010608d1e622f1a5fd5"} Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.808001 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.843958 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" event={"ID":"db9d16c1-a901-456f-a48d-a56879b49c8d","Type":"ContainerStarted","Data":"d06ecdbe5a16206d824eebc25187d4986c7ccb30165b28444d5df0c3e5f1b0fb"} Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.845003 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" event={"ID":"db9d16c1-a901-456f-a48d-a56879b49c8d","Type":"ContainerStarted","Data":"ec57bf4a0fdb1a8c4f66f62a8b2ffd0fcf8e7df5da9dbbebe9e25965884fb4f7"} Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.861592 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-combined-ca-bundle\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.861635 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdrjc\" (UniqueName: \"kubernetes.io/projected/23d834e1-1b59-4463-b893-fd23fa1e7ecd-kube-api-access-qdrjc\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.863149 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-httpd-config\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc 
kubenswrapper[4872]: I0203 06:19:27.863212 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-internal-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.863238 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-config\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.863411 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-ovndb-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.863447 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-public-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.879151 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" podStartSLOduration=5.879134088 podStartE2EDuration="5.879134088s" podCreationTimestamp="2026-02-03 06:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:27.844355609 +0000 UTC m=+1138.427047023" watchObservedRunningTime="2026-02-03 06:19:27.879134088 +0000 UTC m=+1138.461825502" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.881566 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-dd8d7b8db-mtx4r" podStartSLOduration=3.840814547 podStartE2EDuration="8.881560827s" podCreationTimestamp="2026-02-03 06:19:19 +0000 UTC" firstStartedPulling="2026-02-03 06:19:21.361718611 +0000 UTC m=+1131.944410025" lastFinishedPulling="2026-02-03 06:19:26.402464891 +0000 UTC m=+1136.985156305" observedRunningTime="2026-02-03 06:19:27.860639052 +0000 UTC m=+1138.443330466" watchObservedRunningTime="2026-02-03 06:19:27.881560827 +0000 UTC m=+1138.464252241" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.965865 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-internal-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.966090 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-config\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc 
kubenswrapper[4872]: I0203 06:19:27.966309 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-ovndb-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.966388 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-public-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.966469 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-combined-ca-bundle\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.966537 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdrjc\" (UniqueName: \"kubernetes.io/projected/23d834e1-1b59-4463-b893-fd23fa1e7ecd-kube-api-access-qdrjc\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.966657 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-httpd-config\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.990324 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-ovndb-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:27 crc kubenswrapper[4872]: I0203 06:19:27.990413 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-combined-ca-bundle\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.002553 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-internal-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.003238 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-config\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.003807 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-httpd-config\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.011534 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdrjc\" (UniqueName: \"kubernetes.io/projected/23d834e1-1b59-4463-b893-fd23fa1e7ecd-kube-api-access-qdrjc\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.012008 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d834e1-1b59-4463-b893-fd23fa1e7ecd-public-tls-certs\") pod \"neutron-84b5664f65-lkwpl\" (UID: \"23d834e1-1b59-4463-b893-fd23fa1e7ecd\") " pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.122950 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.151664 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57418ad5-e380-4bcf-98a2-622035e88ec4" path="/var/lib/kubelet/pods/57418ad5-e380-4bcf-98a2-622035e88ec4/volumes" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.814128 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84b5664f65-lkwpl"] Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.861240 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84b5664f65-lkwpl" event={"ID":"23d834e1-1b59-4463-b893-fd23fa1e7ecd","Type":"ContainerStarted","Data":"96e3cdf97c1b64502dc8828d12288e3ca73790cbea88e04a4432c267af74a2f3"} Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.866210 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c77557787-rb2tb" event={"ID":"8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5","Type":"ContainerStarted","Data":"d3772638646851f1080af86ed7a2ae05ac082129f8f11d920c279bc35f5e7932"} Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.877475 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerStarted","Data":"9090aacdcc4f851fd277b0557a76addd5ba0c20342b56fd3815356336d263d9e"} Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.884464 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c195a99e-dc2e-4e20-bcf8-8b40a1497680","Type":"ContainerStarted","Data":"f6d2a6bd7de079787794a4674c971df980b5b59b311a120a1e228591de25e7d4"} Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.894598 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api-log" containerID="cri-o://43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f" gracePeriod=30 Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.894708 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d661c4f7-cd02-4399-afe1-8d073e11fe69","Type":"ContainerStarted","Data":"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101"} Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.894928 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/cinder-api-0" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.894941 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api" containerID="cri-o://8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101" gracePeriod=30 Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.896455 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c77557787-rb2tb" podStartSLOduration=4.586045109 podStartE2EDuration="9.896443509s" podCreationTimestamp="2026-02-03 06:19:19 +0000 UTC" firstStartedPulling="2026-02-03 06:19:20.954622016 +0000 UTC m=+1131.537313420" lastFinishedPulling="2026-02-03 06:19:26.265020406 +0000 UTC m=+1136.847711820" observedRunningTime="2026-02-03 06:19:28.887030332 +0000 UTC m=+1139.469721746" watchObservedRunningTime="2026-02-03 06:19:28.896443509 +0000 UTC m=+1139.479134923" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.920622 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.624251339 podStartE2EDuration="6.920605611s" podCreationTimestamp="2026-02-03 06:19:22 +0000 UTC" firstStartedPulling="2026-02-03 06:19:24.10536695 +0000 UTC m=+1134.688058364" lastFinishedPulling="2026-02-03 06:19:26.401721222 +0000 UTC m=+1136.984412636" observedRunningTime="2026-02-03 06:19:28.914665408 +0000 UTC m=+1139.497356832" watchObservedRunningTime="2026-02-03 06:19:28.920605611 +0000 UTC m=+1139.503297025" Feb 03 06:19:28 crc kubenswrapper[4872]: I0203 06:19:28.947938 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.94792109 podStartE2EDuration="5.94792109s" podCreationTimestamp="2026-02-03 06:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:28.931097375 +0000 UTC m=+1139.513788789" watchObservedRunningTime="2026-02-03 06:19:28.94792109 +0000 UTC m=+1139.530612504" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.870735 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.904938 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84b5664f65-lkwpl" event={"ID":"23d834e1-1b59-4463-b893-fd23fa1e7ecd","Type":"ContainerStarted","Data":"e40ecd4ea22752acd61f4f592a1955fa0ac8504b464b5fd870e5db422063e4d6"} Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.904981 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84b5664f65-lkwpl" event={"ID":"23d834e1-1b59-4463-b893-fd23fa1e7ecd","Type":"ContainerStarted","Data":"c34084908b490b1917689b518372222059c409d187433f0d0a27d0a37621a029"} Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.905348 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.909548 4872 generic.go:334] "Generic (PLEG): container finished" podID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerID="8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101" exitCode=0 Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.909573 4872 generic.go:334] "Generic (PLEG): container finished" podID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerID="43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f" exitCode=143 Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.909653 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.910759 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d661c4f7-cd02-4399-afe1-8d073e11fe69","Type":"ContainerDied","Data":"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101"} Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.910799 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d661c4f7-cd02-4399-afe1-8d073e11fe69","Type":"ContainerDied","Data":"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f"} Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.910810 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d661c4f7-cd02-4399-afe1-8d073e11fe69","Type":"ContainerDied","Data":"ed0dbd051aab706c08d129090fefdfbdd74a6b6a2d036b77f5d6cd142ffb11a5"} Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.910825 4872 scope.go:117] "RemoveContainer" containerID="8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.944633 4872 scope.go:117] "RemoveContainer" containerID="43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.953999 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84b5664f65-lkwpl" podStartSLOduration=2.953986419 podStartE2EDuration="2.953986419s" podCreationTimestamp="2026-02-03 06:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:29.945085295 +0000 UTC m=+1140.527776709" watchObservedRunningTime="2026-02-03 06:19:29.953986419 +0000 UTC m=+1140.536677833" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.965199 4872 scope.go:117] "RemoveContainer" containerID="8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101" Feb 03 06:19:29 crc 
kubenswrapper[4872]: E0203 06:19:29.965767 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101\": container with ID starting with 8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101 not found: ID does not exist" containerID="8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.965810 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101"} err="failed to get container status \"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101\": rpc error: code = NotFound desc = could not find container \"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101\": container with ID starting with 8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101 not found: ID does not exist" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.965844 4872 scope.go:117] "RemoveContainer" containerID="43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f" Feb 03 06:19:29 crc kubenswrapper[4872]: E0203 06:19:29.966101 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f\": container with ID starting with 43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f not found: ID does not exist" containerID="43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.966122 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f"} err="failed to get container status \"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f\": rpc error: code = NotFound desc = could not find container \"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f\": container with ID starting with 43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f not found: ID does not exist" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.966135 4872 scope.go:117] "RemoveContainer" containerID="8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.966310 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101"} err="failed to get container status \"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101\": rpc error: code = NotFound desc = could not find container \"8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101\": container with ID starting with 8a87054f6569b08a4905c46a37a974df2b39b770db28647893ea774b10bf5101 not found: ID does not exist" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.966330 4872 scope.go:117] "RemoveContainer" containerID="43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f" Feb 03 06:19:29 crc kubenswrapper[4872]: I0203 06:19:29.966541 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f"} err="failed to get container status 
\"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f\": rpc error: code = NotFound desc = could not find container \"43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f\": container with ID starting with 43b0a6938d4d19b37ed64ca4af1e6c5c07eb9cea5dfb6d5a486676b803101e0f not found: ID does not exist" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.025086 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d661c4f7-cd02-4399-afe1-8d073e11fe69-etc-machine-id\") pod \"d661c4f7-cd02-4399-afe1-8d073e11fe69\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.025167 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d661c4f7-cd02-4399-afe1-8d073e11fe69-logs\") pod \"d661c4f7-cd02-4399-afe1-8d073e11fe69\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.025203 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data-custom\") pod \"d661c4f7-cd02-4399-afe1-8d073e11fe69\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.025233 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-scripts\") pod \"d661c4f7-cd02-4399-afe1-8d073e11fe69\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.025283 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhvcg\" (UniqueName: \"kubernetes.io/projected/d661c4f7-cd02-4399-afe1-8d073e11fe69-kube-api-access-mhvcg\") pod \"d661c4f7-cd02-4399-afe1-8d073e11fe69\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.025349 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data\") pod \"d661c4f7-cd02-4399-afe1-8d073e11fe69\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.025383 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-combined-ca-bundle\") pod \"d661c4f7-cd02-4399-afe1-8d073e11fe69\" (UID: \"d661c4f7-cd02-4399-afe1-8d073e11fe69\") " Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.027029 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d661c4f7-cd02-4399-afe1-8d073e11fe69-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d661c4f7-cd02-4399-afe1-8d073e11fe69" (UID: "d661c4f7-cd02-4399-afe1-8d073e11fe69"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.027219 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d661c4f7-cd02-4399-afe1-8d073e11fe69-logs" (OuterVolumeSpecName: "logs") pod "d661c4f7-cd02-4399-afe1-8d073e11fe69" (UID: "d661c4f7-cd02-4399-afe1-8d073e11fe69"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.031471 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d661c4f7-cd02-4399-afe1-8d073e11fe69-kube-api-access-mhvcg" (OuterVolumeSpecName: "kube-api-access-mhvcg") pod "d661c4f7-cd02-4399-afe1-8d073e11fe69" (UID: "d661c4f7-cd02-4399-afe1-8d073e11fe69"). InnerVolumeSpecName "kube-api-access-mhvcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.035765 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d661c4f7-cd02-4399-afe1-8d073e11fe69" (UID: "d661c4f7-cd02-4399-afe1-8d073e11fe69"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.037933 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-scripts" (OuterVolumeSpecName: "scripts") pod "d661c4f7-cd02-4399-afe1-8d073e11fe69" (UID: "d661c4f7-cd02-4399-afe1-8d073e11fe69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.081848 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data" (OuterVolumeSpecName: "config-data") pod "d661c4f7-cd02-4399-afe1-8d073e11fe69" (UID: "d661c4f7-cd02-4399-afe1-8d073e11fe69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.085677 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d661c4f7-cd02-4399-afe1-8d073e11fe69" (UID: "d661c4f7-cd02-4399-afe1-8d073e11fe69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.126976 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.127005 4872 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d661c4f7-cd02-4399-afe1-8d073e11fe69-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.127015 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d661c4f7-cd02-4399-afe1-8d073e11fe69-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.127024 4872 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.127032 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.127041 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhvcg\" (UniqueName: \"kubernetes.io/projected/d661c4f7-cd02-4399-afe1-8d073e11fe69-kube-api-access-mhvcg\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.127054 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c4f7-cd02-4399-afe1-8d073e11fe69-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.227772 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.240191 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.251083 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:30 crc kubenswrapper[4872]: E0203 06:19:30.251441 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.251464 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api" Feb 03 06:19:30 crc kubenswrapper[4872]: E0203 06:19:30.251483 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api-log" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.251492 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api-log" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.251663 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api-log" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.251678 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" containerName="cinder-api" Feb 03 06:19:30 crc 
kubenswrapper[4872]: I0203 06:19:30.252720 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.256756 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.256994 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.257190 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.325888 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.330762 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvgk\" (UniqueName: \"kubernetes.io/projected/e1031a8c-c3fb-4022-826e-77509f2a2b2f-kube-api-access-srvgk\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.330807 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.330844 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.330931 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1031a8c-c3fb-4022-826e-77509f2a2b2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.330972 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-config-data\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.331036 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1031a8c-c3fb-4022-826e-77509f2a2b2f-logs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.331060 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.331079 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-scripts\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.331097 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432573 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1031a8c-c3fb-4022-826e-77509f2a2b2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432670 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-config-data\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432760 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1031a8c-c3fb-4022-826e-77509f2a2b2f-logs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432797 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432819 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432834 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-scripts\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432877 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srvgk\" (UniqueName: \"kubernetes.io/projected/e1031a8c-c3fb-4022-826e-77509f2a2b2f-kube-api-access-srvgk\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432895 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.432918 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.433508 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1031a8c-c3fb-4022-826e-77509f2a2b2f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.433790 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1031a8c-c3fb-4022-826e-77509f2a2b2f-logs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.438516 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.439248 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.441778 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.442012 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-scripts\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.453189 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srvgk\" (UniqueName: \"kubernetes.io/projected/e1031a8c-c3fb-4022-826e-77509f2a2b2f-kube-api-access-srvgk\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.474095 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-config-data\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.497211 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1031a8c-c3fb-4022-826e-77509f2a2b2f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e1031a8c-c3fb-4022-826e-77509f2a2b2f\") " pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.575372 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.719051 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8447df9874-g22nb"] Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.728082 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.732279 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.733905 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.772832 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8447df9874-g22nb"] Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.841530 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-logs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.841585 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-internal-tls-certs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.841617 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrnv7\" (UniqueName: \"kubernetes.io/projected/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-kube-api-access-nrnv7\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.841700 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-combined-ca-bundle\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.841725 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-config-data\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.841825 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-public-tls-certs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.841857 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-config-data-custom\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.947170 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-logs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.947480 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-internal-tls-certs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.947503 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrnv7\" (UniqueName: \"kubernetes.io/projected/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-kube-api-access-nrnv7\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.947574 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-combined-ca-bundle\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.947595 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-config-data\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.947627 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-public-tls-certs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.947658 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-config-data-custom\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.952026 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-logs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.956278 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-config-data-custom\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.958160 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-internal-tls-certs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.962921 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-config-data\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.965108 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-public-tls-certs\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.979246 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrnv7\" (UniqueName: \"kubernetes.io/projected/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-kube-api-access-nrnv7\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:30 crc kubenswrapper[4872]: I0203 06:19:30.958333 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bfff84-36be-489c-85a2-6e4ebfec4d1a-combined-ca-bundle\") pod \"barbican-api-8447df9874-g22nb\" (UID: \"f3bfff84-36be-489c-85a2-6e4ebfec4d1a\") " pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:31 crc kubenswrapper[4872]: I0203 06:19:31.054483 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:31 crc kubenswrapper[4872]: I0203 06:19:31.285019 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 06:19:31 crc kubenswrapper[4872]: I0203 06:19:31.614617 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8447df9874-g22nb"] Feb 03 06:19:32 crc kubenswrapper[4872]: I0203 06:19:32.016224 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1031a8c-c3fb-4022-826e-77509f2a2b2f","Type":"ContainerStarted","Data":"113ef499b8f62ae5874a28e5d9c8827a8bb8c7ae858c1e460600b84785552f79"} Feb 03 06:19:32 crc kubenswrapper[4872]: I0203 06:19:32.021808 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8447df9874-g22nb" event={"ID":"f3bfff84-36be-489c-85a2-6e4ebfec4d1a","Type":"ContainerStarted","Data":"3c4a840e3a98b83e8f6feac4e5d605b613ddd44c6a4b7ee0e699f3dce51efc72"} Feb 03 06:19:32 crc kubenswrapper[4872]: I0203 06:19:32.021850 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8447df9874-g22nb" event={"ID":"f3bfff84-36be-489c-85a2-6e4ebfec4d1a","Type":"ContainerStarted","Data":"24d7195b34297a8f8bacbcef788195a5e7310c5b4fa7d6f67ddd8353469598af"} Feb 03 06:19:32 crc kubenswrapper[4872]: I0203 06:19:32.026107 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerStarted","Data":"6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48"} Feb 03 06:19:32 crc kubenswrapper[4872]: I0203 06:19:32.027176 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 06:19:32 crc kubenswrapper[4872]: I0203 06:19:32.132771 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d661c4f7-cd02-4399-afe1-8d073e11fe69" path="/var/lib/kubelet/pods/d661c4f7-cd02-4399-afe1-8d073e11fe69/volumes" Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.046054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1031a8c-c3fb-4022-826e-77509f2a2b2f","Type":"ContainerStarted","Data":"345e84b6c4295452b8cebb3397d5ec70017a1d85e731c7a379ed2d755940de51"} Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.058644 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8447df9874-g22nb" event={"ID":"f3bfff84-36be-489c-85a2-6e4ebfec4d1a","Type":"ContainerStarted","Data":"e5dddae99140f55b0ffbe30802396d9faaa4060251531be7e4068d24b354548c"} Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.059090 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.059259 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.079391 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.463010107 podStartE2EDuration="12.079374933s" podCreationTimestamp="2026-02-03 06:19:21 +0000 UTC" firstStartedPulling="2026-02-03 06:19:22.785295669 +0000 UTC m=+1133.367987083" lastFinishedPulling="2026-02-03 06:19:30.401660495 +0000 UTC m=+1140.984351909" observedRunningTime="2026-02-03 06:19:32.055516425 +0000 UTC m=+1142.638207849" 
watchObservedRunningTime="2026-02-03 06:19:33.079374933 +0000 UTC m=+1143.662066347" Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.081887 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8447df9874-g22nb" podStartSLOduration=3.081879634 podStartE2EDuration="3.081879634s" podCreationTimestamp="2026-02-03 06:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:33.078255367 +0000 UTC m=+1143.660946781" watchObservedRunningTime="2026-02-03 06:19:33.081879634 +0000 UTC m=+1143.664571048" Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.131020 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.174845 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.317828 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-gjzcr"] Feb 03 06:19:33 crc kubenswrapper[4872]: I0203 06:19:33.318106 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" podUID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerName="dnsmasq-dns" containerID="cri-o://2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328" gracePeriod=10 Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.015616 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.069376 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e1031a8c-c3fb-4022-826e-77509f2a2b2f","Type":"ContainerStarted","Data":"458d092b9548b3714c5911b8cc56164ee65b9f52261b480aa77c87f1e05524ef"} Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.069508 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.071388 4872 generic.go:334] "Generic (PLEG): container finished" podID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerID="2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328" exitCode=0 Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.071440 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.071548 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" event={"ID":"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef","Type":"ContainerDied","Data":"2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328"} Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.071607 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-gjzcr" event={"ID":"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef","Type":"ContainerDied","Data":"b4d5936f32ed0d48f1e91c54cf40dade35fd13b2fdaef02a893a0e347bd7a5be"} Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.071630 4872 scope.go:117] "RemoveContainer" containerID="2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.075842 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.091099 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.091084369 podStartE2EDuration="4.091084369s" podCreationTimestamp="2026-02-03 06:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:34.088078487 +0000 UTC m=+1144.670769901" watchObservedRunningTime="2026-02-03 06:19:34.091084369 +0000 UTC m=+1144.673775783" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.108528 4872 scope.go:117] "RemoveContainer" containerID="3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.136294 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-nb\") pod \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.136365 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-svc\") pod \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.136448 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28gj7\" (UniqueName: \"kubernetes.io/projected/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-kube-api-access-28gj7\") pod \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.136521 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-config\") pod \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.136571 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-sb\") pod \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.136603 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-swift-storage-0\") pod \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\" (UID: \"c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef\") " Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.150857 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-kube-api-access-28gj7" (OuterVolumeSpecName: "kube-api-access-28gj7") pod "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" (UID: "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef"). InnerVolumeSpecName "kube-api-access-28gj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.158073 4872 scope.go:117] "RemoveContainer" containerID="2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328" Feb 03 06:19:34 crc kubenswrapper[4872]: E0203 06:19:34.160862 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328\": container with ID starting with 2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328 not found: ID does not exist" containerID="2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.160901 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328"} err="failed to get container status \"2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328\": rpc error: code = NotFound desc = could not find container \"2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328\": container with ID starting with 2e1c7cb02e47c8e73f5cf4063be697a32bf9e5b1440e9fa781e8c4cae495c328 not found: ID does not exist" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.160925 4872 scope.go:117] "RemoveContainer" containerID="3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86" Feb 03 06:19:34 crc kubenswrapper[4872]: E0203 06:19:34.162600 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86\": container with ID starting with 3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86 not found: ID does not exist" containerID="3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.162632 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86"} err="failed to get container status \"3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86\": rpc error: code = NotFound desc = could not find container \"3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86\": container with ID starting with 3ea11bf62592b9075fc2ac5ea45a1f67897ba216d401fb0bd86e423f2efa0a86 not found: ID does not exist" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.225354 4872 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" (UID: "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.230244 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" (UID: "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.238434 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-config" (OuterVolumeSpecName: "config") pod "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" (UID: "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.238893 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28gj7\" (UniqueName: \"kubernetes.io/projected/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-kube-api-access-28gj7\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.238923 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.238932 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.238940 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.261368 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" (UID: "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.263807 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.264239 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.266160 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" (UID: "c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.340613 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.340650 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.399935 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-gjzcr"] Feb 03 06:19:34 crc kubenswrapper[4872]: I0203 06:19:34.406542 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-gjzcr"] Feb 03 06:19:35 crc kubenswrapper[4872]: I0203 06:19:35.085218 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6447fd6947-kkqrr" Feb 03 06:19:35 crc kubenswrapper[4872]: I0203 06:19:35.258935 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:35 crc kubenswrapper[4872]: I0203 06:19:35.258985 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:36 crc kubenswrapper[4872]: I0203 06:19:36.132143 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" path="/var/lib/kubelet/pods/c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef/volumes" Feb 03 06:19:37 crc kubenswrapper[4872]: I0203 06:19:37.665490 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:38 crc kubenswrapper[4872]: I0203 06:19:38.088502 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:38 crc kubenswrapper[4872]: I0203 06:19:38.162802 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 06:19:38 crc kubenswrapper[4872]: I0203 06:19:38.207085 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:38 crc kubenswrapper[4872]: I0203 06:19:38.777338 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:39 crc kubenswrapper[4872]: I0203 06:19:39.129289 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="cinder-scheduler" containerID="cri-o://5fc06134d723ce29f8b489386a89af06e0f2a6e0cc8c87d038da2a6c1515e82a" gracePeriod=30 Feb 03 06:19:39 crc kubenswrapper[4872]: I0203 06:19:39.129379 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="probe" containerID="cri-o://f6d2a6bd7de079787794a4674c971df980b5b59b311a120a1e228591de25e7d4" gracePeriod=30 Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.139850 4872 generic.go:334] "Generic (PLEG): container finished" podID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerID="f6d2a6bd7de079787794a4674c971df980b5b59b311a120a1e228591de25e7d4" exitCode=0 Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.139894 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c195a99e-dc2e-4e20-bcf8-8b40a1497680","Type":"ContainerDied","Data":"f6d2a6bd7de079787794a4674c971df980b5b59b311a120a1e228591de25e7d4"} Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.176765 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 03 06:19:40 crc kubenswrapper[4872]: E0203 06:19:40.177148 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerName="init" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.177163 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerName="init" Feb 03 06:19:40 crc kubenswrapper[4872]: E0203 06:19:40.177181 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerName="dnsmasq-dns" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.177187 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerName="dnsmasq-dns" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.177338 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3053af6-1e44-4aee-8ea5-0bcbcc6ee8ef" containerName="dnsmasq-dns" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.177854 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.181073 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-w4pkm" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.181532 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.183211 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.197985 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.246878 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsbhf\" (UniqueName: \"kubernetes.io/projected/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-kube-api-access-qsbhf\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.246988 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-openstack-config-secret\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.247019 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-openstack-config\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.247158 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.349184 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.349664 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbhf\" (UniqueName: \"kubernetes.io/projected/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-kube-api-access-qsbhf\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.349801 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-openstack-config-secret\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.349887 4872 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-openstack-config\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.350567 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-openstack-config\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.355169 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.356113 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-openstack-config-secret\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.368206 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbhf\" (UniqueName: \"kubernetes.io/projected/7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67-kube-api-access-qsbhf\") pod \"openstackclient\" (UID: \"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67\") " pod="openstack/openstackclient" Feb 03 06:19:40 crc kubenswrapper[4872]: I0203 06:19:40.497627 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.106878 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8447df9874-g22nb" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.150263 4872 generic.go:334] "Generic (PLEG): container finished" podID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerID="5fc06134d723ce29f8b489386a89af06e0f2a6e0cc8c87d038da2a6c1515e82a" exitCode=0 Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.150299 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c195a99e-dc2e-4e20-bcf8-8b40a1497680","Type":"ContainerDied","Data":"5fc06134d723ce29f8b489386a89af06e0f2a6e0cc8c87d038da2a6c1515e82a"} Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.221505 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b5585877d-9z5lp"] Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.221738 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api-log" containerID="cri-o://87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a" gracePeriod=30 Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.222123 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api" containerID="cri-o://076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92" gracePeriod=30 Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.250504 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.668263 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.694530 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-combined-ca-bundle\") pod \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.694963 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdrm4\" (UniqueName: \"kubernetes.io/projected/c195a99e-dc2e-4e20-bcf8-8b40a1497680-kube-api-access-gdrm4\") pod \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.695103 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data\") pod \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.695128 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c195a99e-dc2e-4e20-bcf8-8b40a1497680-etc-machine-id\") pod \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.695186 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data-custom\") pod \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.695270 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-scripts\") pod \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\" (UID: \"c195a99e-dc2e-4e20-bcf8-8b40a1497680\") " Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.696384 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c195a99e-dc2e-4e20-bcf8-8b40a1497680-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c195a99e-dc2e-4e20-bcf8-8b40a1497680" (UID: "c195a99e-dc2e-4e20-bcf8-8b40a1497680"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.710845 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c195a99e-dc2e-4e20-bcf8-8b40a1497680" (UID: "c195a99e-dc2e-4e20-bcf8-8b40a1497680"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.711862 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c195a99e-dc2e-4e20-bcf8-8b40a1497680-kube-api-access-gdrm4" (OuterVolumeSpecName: "kube-api-access-gdrm4") pod "c195a99e-dc2e-4e20-bcf8-8b40a1497680" (UID: "c195a99e-dc2e-4e20-bcf8-8b40a1497680"). InnerVolumeSpecName "kube-api-access-gdrm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.716180 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-scripts" (OuterVolumeSpecName: "scripts") pod "c195a99e-dc2e-4e20-bcf8-8b40a1497680" (UID: "c195a99e-dc2e-4e20-bcf8-8b40a1497680"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.797154 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.797198 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdrm4\" (UniqueName: \"kubernetes.io/projected/c195a99e-dc2e-4e20-bcf8-8b40a1497680-kube-api-access-gdrm4\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.797211 4872 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c195a99e-dc2e-4e20-bcf8-8b40a1497680-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.797220 4872 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.820976 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c195a99e-dc2e-4e20-bcf8-8b40a1497680" (UID: "c195a99e-dc2e-4e20-bcf8-8b40a1497680"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.894921 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data" (OuterVolumeSpecName: "config-data") pod "c195a99e-dc2e-4e20-bcf8-8b40a1497680" (UID: "c195a99e-dc2e-4e20-bcf8-8b40a1497680"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.899260 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:41 crc kubenswrapper[4872]: I0203 06:19:41.899283 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c195a99e-dc2e-4e20-bcf8-8b40a1497680-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.206904 4872 generic.go:334] "Generic (PLEG): container finished" podID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerID="87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a" exitCode=143 Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.206998 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b5585877d-9z5lp" event={"ID":"683c6fc3-d9ec-4e50-9c36-22a929897924","Type":"ContainerDied","Data":"87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a"} Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.212493 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67","Type":"ContainerStarted","Data":"38a1160a1048a70c873009ed03f39d0ed799a829d66617b1b4f225fd8eba73c5"} Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.230314 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c195a99e-dc2e-4e20-bcf8-8b40a1497680","Type":"ContainerDied","Data":"7dfa180d16e0b3903c3c38d86d40fdbb5200f4ce8b6d672931e34bda2fefc091"} Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.230357 4872 scope.go:117] "RemoveContainer" containerID="f6d2a6bd7de079787794a4674c971df980b5b59b311a120a1e228591de25e7d4" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.230745 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.268002 4872 scope.go:117] "RemoveContainer" containerID="5fc06134d723ce29f8b489386a89af06e0f2a6e0cc8c87d038da2a6c1515e82a" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.268136 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.291695 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.331222 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:42 crc kubenswrapper[4872]: E0203 06:19:42.331623 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="probe" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.331641 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="probe" Feb 03 06:19:42 crc kubenswrapper[4872]: E0203 06:19:42.331663 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="cinder-scheduler" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.331670 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="cinder-scheduler" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.331857 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="cinder-scheduler" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.331876 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" containerName="probe" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.336892 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.339334 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.359448 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.517093 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.517464 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.517645 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kph6j\" (UniqueName: \"kubernetes.io/projected/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-kube-api-access-kph6j\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.517949 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.518030 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-scripts\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.518257 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-config-data\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.619254 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kph6j\" (UniqueName: \"kubernetes.io/projected/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-kube-api-access-kph6j\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.619347 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.619380 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-scripts\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.619413 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-config-data\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.619437 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.619463 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.620319 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.627142 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.627456 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-scripts\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.628295 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.630558 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-config-data\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.640619 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kph6j\" (UniqueName: \"kubernetes.io/projected/1e82d1b0-0354-4126-b305-6af3e5fdcb9a-kube-api-access-kph6j\") pod \"cinder-scheduler-0\" (UID: \"1e82d1b0-0354-4126-b305-6af3e5fdcb9a\") " 
pod="openstack/cinder-scheduler-0" Feb 03 06:19:42 crc kubenswrapper[4872]: I0203 06:19:42.660325 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.154246 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.279679 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e82d1b0-0354-4126-b305-6af3e5fdcb9a","Type":"ContainerStarted","Data":"b68d5bef683a91450ef8a0b2b6146da1e77482fa68324c1a82c0b027a5797d0e"} Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.302002 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f5458fb75-k8gpr"] Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.312662 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.317655 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.318615 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.322147 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.330834 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f5458fb75-k8gpr"] Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442422 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-internal-tls-certs\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442483 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-combined-ca-bundle\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442501 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-config-data\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442521 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-public-tls-certs\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442540 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7869dbc8-d72a-47cf-8547-40b91024653f-run-httpd\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442799 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7869dbc8-d72a-47cf-8547-40b91024653f-etc-swift\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442820 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7869dbc8-d72a-47cf-8547-40b91024653f-log-httpd\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.442837 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwqr\" (UniqueName: \"kubernetes.io/projected/7869dbc8-d72a-47cf-8547-40b91024653f-kube-api-access-xjwqr\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545173 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7869dbc8-d72a-47cf-8547-40b91024653f-etc-swift\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545450 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7869dbc8-d72a-47cf-8547-40b91024653f-log-httpd\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545471 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwqr\" (UniqueName: \"kubernetes.io/projected/7869dbc8-d72a-47cf-8547-40b91024653f-kube-api-access-xjwqr\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545557 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-internal-tls-certs\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545582 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-combined-ca-bundle\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545599 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-config-data\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545621 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-public-tls-certs\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.545643 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7869dbc8-d72a-47cf-8547-40b91024653f-run-httpd\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.546232 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7869dbc8-d72a-47cf-8547-40b91024653f-run-httpd\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.549603 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7869dbc8-d72a-47cf-8547-40b91024653f-log-httpd\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.560585 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-public-tls-certs\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.563025 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-combined-ca-bundle\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.566770 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7869dbc8-d72a-47cf-8547-40b91024653f-etc-swift\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.567305 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-internal-tls-certs\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.575706 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwqr\" (UniqueName: \"kubernetes.io/projected/7869dbc8-d72a-47cf-8547-40b91024653f-kube-api-access-xjwqr\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: 
\"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.576271 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7869dbc8-d72a-47cf-8547-40b91024653f-config-data\") pod \"swift-proxy-f5458fb75-k8gpr\" (UID: \"7869dbc8-d72a-47cf-8547-40b91024653f\") " pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:43 crc kubenswrapper[4872]: I0203 06:19:43.656998 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:44 crc kubenswrapper[4872]: I0203 06:19:44.135445 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c195a99e-dc2e-4e20-bcf8-8b40a1497680" path="/var/lib/kubelet/pods/c195a99e-dc2e-4e20-bcf8-8b40a1497680/volumes" Feb 03 06:19:44 crc kubenswrapper[4872]: I0203 06:19:44.312092 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e82d1b0-0354-4126-b305-6af3e5fdcb9a","Type":"ContainerStarted","Data":"2be2854165054c01856840cc7f23eb85369e6844ef270bfef4ba0a4a86484ab6"} Feb 03 06:19:44 crc kubenswrapper[4872]: I0203 06:19:44.346221 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f5458fb75-k8gpr"] Feb 03 06:19:44 crc kubenswrapper[4872]: I0203 06:19:44.580875 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="e1031a8c-c3fb-4022-826e-77509f2a2b2f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.227274 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.294409 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/683c6fc3-d9ec-4e50-9c36-22a929897924-logs\") pod \"683c6fc3-d9ec-4e50-9c36-22a929897924\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.294488 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data\") pod \"683c6fc3-d9ec-4e50-9c36-22a929897924\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.294520 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt7sp\" (UniqueName: \"kubernetes.io/projected/683c6fc3-d9ec-4e50-9c36-22a929897924-kube-api-access-jt7sp\") pod \"683c6fc3-d9ec-4e50-9c36-22a929897924\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.294555 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data-custom\") pod \"683c6fc3-d9ec-4e50-9c36-22a929897924\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.294580 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-combined-ca-bundle\") pod \"683c6fc3-d9ec-4e50-9c36-22a929897924\" (UID: \"683c6fc3-d9ec-4e50-9c36-22a929897924\") " Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.295511 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/683c6fc3-d9ec-4e50-9c36-22a929897924-logs" (OuterVolumeSpecName: "logs") pod "683c6fc3-d9ec-4e50-9c36-22a929897924" (UID: "683c6fc3-d9ec-4e50-9c36-22a929897924"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.308034 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683c6fc3-d9ec-4e50-9c36-22a929897924-kube-api-access-jt7sp" (OuterVolumeSpecName: "kube-api-access-jt7sp") pod "683c6fc3-d9ec-4e50-9c36-22a929897924" (UID: "683c6fc3-d9ec-4e50-9c36-22a929897924"). InnerVolumeSpecName "kube-api-access-jt7sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.310798 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "683c6fc3-d9ec-4e50-9c36-22a929897924" (UID: "683c6fc3-d9ec-4e50-9c36-22a929897924"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.339189 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "683c6fc3-d9ec-4e50-9c36-22a929897924" (UID: "683c6fc3-d9ec-4e50-9c36-22a929897924"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.357404 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f5458fb75-k8gpr" event={"ID":"7869dbc8-d72a-47cf-8547-40b91024653f","Type":"ContainerStarted","Data":"482f85d324a4e581b4b2c11cdc1f731b286a02811a65d8fc15e333b5d2a823a3"} Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.357448 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f5458fb75-k8gpr" event={"ID":"7869dbc8-d72a-47cf-8547-40b91024653f","Type":"ContainerStarted","Data":"35805b049a1d3c7e8911b3e27681b6e07ea1274f774e3862edb9508fdcbcaa07"} Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.357461 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f5458fb75-k8gpr" event={"ID":"7869dbc8-d72a-47cf-8547-40b91024653f","Type":"ContainerStarted","Data":"8266ff91df1951a7b0e7d818db19e3ff27be7f887c3119368ce49b0b028660f6"} Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.357864 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.357923 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.363389 4872 generic.go:334] "Generic (PLEG): container finished" podID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerID="076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92" exitCode=0 Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.363522 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b5585877d-9z5lp" event={"ID":"683c6fc3-d9ec-4e50-9c36-22a929897924","Type":"ContainerDied","Data":"076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92"} Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.363550 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b5585877d-9z5lp" event={"ID":"683c6fc3-d9ec-4e50-9c36-22a929897924","Type":"ContainerDied","Data":"d8b17f8b75c269b32179beb6de7d299b46f8b82b0a12ac6ccb64980158f57c08"} Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.363585 4872 scope.go:117] "RemoveContainer" containerID="076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.363752 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b5585877d-9z5lp" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.378445 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e82d1b0-0354-4126-b305-6af3e5fdcb9a","Type":"ContainerStarted","Data":"bd1682f5700e1fe783fc58ce76033983ac93979eac72a0a748c1a75a5e59398d"} Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.391822 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data" (OuterVolumeSpecName: "config-data") pod "683c6fc3-d9ec-4e50-9c36-22a929897924" (UID: "683c6fc3-d9ec-4e50-9c36-22a929897924"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.399966 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/683c6fc3-d9ec-4e50-9c36-22a929897924-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.401463 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.401528 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt7sp\" (UniqueName: \"kubernetes.io/projected/683c6fc3-d9ec-4e50-9c36-22a929897924-kube-api-access-jt7sp\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.401582 4872 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.401634 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c6fc3-d9ec-4e50-9c36-22a929897924-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.401943 4872 scope.go:117] "RemoveContainer" containerID="87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.418729 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.418706235 podStartE2EDuration="3.418706235s" podCreationTimestamp="2026-02-03 06:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:45.408030048 +0000 UTC m=+1155.990721462" watchObservedRunningTime="2026-02-03 06:19:45.418706235 +0000 UTC m=+1156.001397659" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.421487 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f5458fb75-k8gpr" podStartSLOduration=2.421477382 podStartE2EDuration="2.421477382s" podCreationTimestamp="2026-02-03 06:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:45.378800213 +0000 UTC m=+1155.961491627" watchObservedRunningTime="2026-02-03 06:19:45.421477382 +0000 UTC m=+1156.004168796" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.429810 4872 scope.go:117] "RemoveContainer" containerID="076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92" Feb 03 06:19:45 crc kubenswrapper[4872]: E0203 06:19:45.430332 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92\": container with ID starting with 076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92 not found: ID does not exist" containerID="076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.430372 4872 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92"} err="failed to get container status \"076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92\": rpc error: code = NotFound desc = could not find container \"076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92\": container with ID starting with 076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92 not found: ID does not exist" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.430398 4872 scope.go:117] "RemoveContainer" containerID="87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a" Feb 03 06:19:45 crc kubenswrapper[4872]: E0203 06:19:45.431050 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a\": container with ID starting with 87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a not found: ID does not exist" containerID="87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.431085 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a"} err="failed to get container status \"87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a\": rpc error: code = NotFound desc = could not find container \"87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a\": container with ID starting with 87fce41d2672adfd27905432d0f5e198cb624e6a911704e2d1b8e94a529f264a not found: ID does not exist" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.580833 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e1031a8c-c3fb-4022-826e-77509f2a2b2f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.704310 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b5585877d-9z5lp"] Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.721843 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b5585877d-9z5lp"] Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.955556 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.955831 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="ceilometer-central-agent" containerID="cri-o://0a6156556717d23db672daf90649eef056db7d825d186e86e07aab6f3f011879" gracePeriod=30 Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.956118 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="proxy-httpd" containerID="cri-o://6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48" gracePeriod=30 Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.956158 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="sg-core" 
containerID="cri-o://9090aacdcc4f851fd277b0557a76addd5ba0c20342b56fd3815356336d263d9e" gracePeriod=30 Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.956204 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="ceilometer-notification-agent" containerID="cri-o://d8b986f1dc2139eae0f150ec499692542c00c7a32fc9258d14073ee0ab04f552" gracePeriod=30 Feb 03 06:19:45 crc kubenswrapper[4872]: I0203 06:19:45.969776 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": EOF" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.136565 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" path="/var/lib/kubelet/pods/683c6fc3-d9ec-4e50-9c36-22a929897924/volumes" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.313785 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-njmck"] Feb 03 06:19:46 crc kubenswrapper[4872]: E0203 06:19:46.314107 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.314119 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api" Feb 03 06:19:46 crc kubenswrapper[4872]: E0203 06:19:46.314140 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api-log" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.314145 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api-log" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.314289 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.314304 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api-log" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.315003 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.331224 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-njmck"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.425556 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5fbv\" (UniqueName: \"kubernetes.io/projected/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-kube-api-access-n5fbv\") pod \"nova-api-db-create-njmck\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.425635 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-operator-scripts\") pod \"nova-api-db-create-njmck\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.458285 4872 generic.go:334] "Generic (PLEG): container finished" podID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerID="6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48" exitCode=0 Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.458367 4872 generic.go:334] "Generic (PLEG): container finished" podID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerID="9090aacdcc4f851fd277b0557a76addd5ba0c20342b56fd3815356336d263d9e" exitCode=2 Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.458456 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerDied","Data":"6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48"} Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.458485 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerDied","Data":"9090aacdcc4f851fd277b0557a76addd5ba0c20342b56fd3815356336d263d9e"} Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.459472 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-89b4-account-create-update-wbhm5"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.465377 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.487794 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.494428 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89b4-account-create-update-wbhm5"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.527664 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-operator-scripts\") pod \"nova-api-db-create-njmck\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.527994 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwkkh\" (UniqueName: \"kubernetes.io/projected/ac0da642-81fc-4cf5-9933-210cf0f17ba9-kube-api-access-vwkkh\") pod \"nova-api-89b4-account-create-update-wbhm5\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.528105 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0da642-81fc-4cf5-9933-210cf0f17ba9-operator-scripts\") pod \"nova-api-89b4-account-create-update-wbhm5\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.528198 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5fbv\" (UniqueName: \"kubernetes.io/projected/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-kube-api-access-n5fbv\") pod \"nova-api-db-create-njmck\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.529852 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-operator-scripts\") pod \"nova-api-db-create-njmck\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.544795 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dzr2t"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.546034 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.553296 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dzr2t"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.561541 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5fbv\" (UniqueName: \"kubernetes.io/projected/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-kube-api-access-n5fbv\") pod \"nova-api-db-create-njmck\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.630873 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0da642-81fc-4cf5-9933-210cf0f17ba9-operator-scripts\") pod \"nova-api-89b4-account-create-update-wbhm5\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.631018 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7zhm\" (UniqueName: \"kubernetes.io/projected/49628158-8aeb-4512-9585-91db75925666-kube-api-access-n7zhm\") pod \"nova-cell0-db-create-dzr2t\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.631063 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49628158-8aeb-4512-9585-91db75925666-operator-scripts\") pod \"nova-cell0-db-create-dzr2t\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.631126 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwkkh\" (UniqueName: \"kubernetes.io/projected/ac0da642-81fc-4cf5-9933-210cf0f17ba9-kube-api-access-vwkkh\") pod \"nova-api-89b4-account-create-update-wbhm5\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.631444 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0da642-81fc-4cf5-9933-210cf0f17ba9-operator-scripts\") pod \"nova-api-89b4-account-create-update-wbhm5\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.636513 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-c9xvp"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.637783 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.645645 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.653244 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-669f-account-create-update-rk72d"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.654630 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.655899 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwkkh\" (UniqueName: \"kubernetes.io/projected/ac0da642-81fc-4cf5-9933-210cf0f17ba9-kube-api-access-vwkkh\") pod \"nova-api-89b4-account-create-update-wbhm5\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.664485 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.665034 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c9xvp"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.690072 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-669f-account-create-update-rk72d"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.736599 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvmr\" (UniqueName: \"kubernetes.io/projected/5f8ac6be-5bf4-4866-b8c7-073a00d94310-kube-api-access-kqvmr\") pod \"nova-cell1-db-create-c9xvp\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.737049 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7zhm\" (UniqueName: \"kubernetes.io/projected/49628158-8aeb-4512-9585-91db75925666-kube-api-access-n7zhm\") pod \"nova-cell0-db-create-dzr2t\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.737110 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49628158-8aeb-4512-9585-91db75925666-operator-scripts\") pod \"nova-cell0-db-create-dzr2t\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.737190 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spn9r\" (UniqueName: \"kubernetes.io/projected/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-kube-api-access-spn9r\") pod \"nova-cell0-669f-account-create-update-rk72d\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.737259 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8ac6be-5bf4-4866-b8c7-073a00d94310-operator-scripts\") pod \"nova-cell1-db-create-c9xvp\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.737357 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-operator-scripts\") pod \"nova-cell0-669f-account-create-update-rk72d\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:46 crc 
kubenswrapper[4872]: I0203 06:19:46.738653 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49628158-8aeb-4512-9585-91db75925666-operator-scripts\") pod \"nova-cell0-db-create-dzr2t\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.762461 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7zhm\" (UniqueName: \"kubernetes.io/projected/49628158-8aeb-4512-9585-91db75925666-kube-api-access-n7zhm\") pod \"nova-cell0-db-create-dzr2t\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.842626 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spn9r\" (UniqueName: \"kubernetes.io/projected/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-kube-api-access-spn9r\") pod \"nova-cell0-669f-account-create-update-rk72d\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.842715 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8ac6be-5bf4-4866-b8c7-073a00d94310-operator-scripts\") pod \"nova-cell1-db-create-c9xvp\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.842779 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-operator-scripts\") pod \"nova-cell0-669f-account-create-update-rk72d\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.842805 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvmr\" (UniqueName: \"kubernetes.io/projected/5f8ac6be-5bf4-4866-b8c7-073a00d94310-kube-api-access-kqvmr\") pod \"nova-cell1-db-create-c9xvp\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.843854 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8ac6be-5bf4-4866-b8c7-073a00d94310-operator-scripts\") pod \"nova-cell1-db-create-c9xvp\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.844573 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.858772 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-operator-scripts\") pod \"nova-cell0-669f-account-create-update-rk72d\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.876052 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-08b5-account-create-update-mfr7x"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.877226 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.882059 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.896377 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvmr\" (UniqueName: \"kubernetes.io/projected/5f8ac6be-5bf4-4866-b8c7-073a00d94310-kube-api-access-kqvmr\") pod \"nova-cell1-db-create-c9xvp\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.899175 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spn9r\" (UniqueName: \"kubernetes.io/projected/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-kube-api-access-spn9r\") pod \"nova-cell0-669f-account-create-update-rk72d\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.930093 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.931225 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-08b5-account-create-update-mfr7x"] Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.946214 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff6dd5da-4798-4300-8752-0eafdd05cf40-operator-scripts\") pod \"nova-cell1-08b5-account-create-update-mfr7x\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.946309 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xz82\" (UniqueName: \"kubernetes.io/projected/ff6dd5da-4798-4300-8752-0eafdd05cf40-kube-api-access-4xz82\") pod \"nova-cell1-08b5-account-create-update-mfr7x\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:46 crc kubenswrapper[4872]: I0203 06:19:46.978883 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.051058 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xz82\" (UniqueName: \"kubernetes.io/projected/ff6dd5da-4798-4300-8752-0eafdd05cf40-kube-api-access-4xz82\") pod \"nova-cell1-08b5-account-create-update-mfr7x\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.051248 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff6dd5da-4798-4300-8752-0eafdd05cf40-operator-scripts\") pod \"nova-cell1-08b5-account-create-update-mfr7x\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.053041 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff6dd5da-4798-4300-8752-0eafdd05cf40-operator-scripts\") pod \"nova-cell1-08b5-account-create-update-mfr7x\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.068151 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xz82\" (UniqueName: \"kubernetes.io/projected/ff6dd5da-4798-4300-8752-0eafdd05cf40-kube-api-access-4xz82\") pod \"nova-cell1-08b5-account-create-update-mfr7x\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.078059 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.239406 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-njmck"] Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.306037 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.428720 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dzr2t"] Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.535913 4872 generic.go:334] "Generic (PLEG): container finished" podID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerID="0a6156556717d23db672daf90649eef056db7d825d186e86e07aab6f3f011879" exitCode=0 Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.536001 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerDied","Data":"0a6156556717d23db672daf90649eef056db7d825d186e86e07aab6f3f011879"} Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.561191 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-njmck" event={"ID":"5e43edca-f702-4b9f-b8be-f95bda7b7a1e","Type":"ContainerStarted","Data":"8743cd55db071468d2353ff0bc947905113c3150fffafdb3da5add1ec4a49ae8"} Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.664795 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.675602 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89b4-account-create-update-wbhm5"] Feb 03 06:19:47 crc kubenswrapper[4872]: I0203 06:19:47.857861 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-669f-account-create-update-rk72d"] Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.040607 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c9xvp"] Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.251615 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-08b5-account-create-update-mfr7x"] Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.576989 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c9xvp" event={"ID":"5f8ac6be-5bf4-4866-b8c7-073a00d94310","Type":"ContainerStarted","Data":"c668d789f1296f33d4c994a4d99a2d04a8750899e0f0277d5ca7655f19b057a3"} Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.581036 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89b4-account-create-update-wbhm5" event={"ID":"ac0da642-81fc-4cf5-9933-210cf0f17ba9","Type":"ContainerStarted","Data":"cdec3e1aaf264950ef9968c84c52460a5c670142cc7dae8ff6c3fd185f9f1540"} Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.584452 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-669f-account-create-update-rk72d" event={"ID":"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f","Type":"ContainerStarted","Data":"e80115dd71199c192edfa4af7cb2c4058ef56f82a7478e6591db50e57013b5b8"} Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.586202 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-njmck" event={"ID":"5e43edca-f702-4b9f-b8be-f95bda7b7a1e","Type":"ContainerStarted","Data":"e047b8817d33a644cf3b0bcbeda1ba1ab9570751e85a39efff2bfd5b043db152"} Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.590551 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dzr2t" 
event={"ID":"49628158-8aeb-4512-9585-91db75925666","Type":"ContainerStarted","Data":"bb76e23e546a526cf389bb883de5831f95449710f17678077ca760843b473a79"} Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.590594 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dzr2t" event={"ID":"49628158-8aeb-4512-9585-91db75925666","Type":"ContainerStarted","Data":"f35c9699b735718d4397ef40543782a6d89362907881b7305632f5ff6998fbbf"} Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.593141 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" event={"ID":"ff6dd5da-4798-4300-8752-0eafdd05cf40","Type":"ContainerStarted","Data":"945ac3ffde33803a672a415575ad472bc9eb75fac62bd8816fa71738b784ee6c"} Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.613230 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-njmck" podStartSLOduration=2.6132134259999997 podStartE2EDuration="2.613213426s" podCreationTimestamp="2026-02-03 06:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:48.604522046 +0000 UTC m=+1159.187213460" watchObservedRunningTime="2026-02-03 06:19:48.613213426 +0000 UTC m=+1159.195904830" Feb 03 06:19:48 crc kubenswrapper[4872]: I0203 06:19:48.649215 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-dzr2t" podStartSLOduration=2.649197614 podStartE2EDuration="2.649197614s" podCreationTimestamp="2026-02-03 06:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:19:48.642762988 +0000 UTC m=+1159.225454402" watchObservedRunningTime="2026-02-03 06:19:48.649197614 +0000 UTC m=+1159.231889028" Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.603035 4872 generic.go:334] "Generic (PLEG): container finished" podID="ac0da642-81fc-4cf5-9933-210cf0f17ba9" containerID="c3844b84b7b30f14131073b83e1edff28344aa6bdbef9a48f80b5fd33cc726ae" exitCode=0 Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.603118 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89b4-account-create-update-wbhm5" event={"ID":"ac0da642-81fc-4cf5-9933-210cf0f17ba9","Type":"ContainerDied","Data":"c3844b84b7b30f14131073b83e1edff28344aa6bdbef9a48f80b5fd33cc726ae"} Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.605149 4872 generic.go:334] "Generic (PLEG): container finished" podID="5f8ac6be-5bf4-4866-b8c7-073a00d94310" containerID="ded2277f618d56eebb5243ee02a00c08ae5cec0200584aae48f45922633f19ec" exitCode=0 Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.605217 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c9xvp" event={"ID":"5f8ac6be-5bf4-4866-b8c7-073a00d94310","Type":"ContainerDied","Data":"ded2277f618d56eebb5243ee02a00c08ae5cec0200584aae48f45922633f19ec"} Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.607822 4872 generic.go:334] "Generic (PLEG): container finished" podID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerID="d8b986f1dc2139eae0f150ec499692542c00c7a32fc9258d14073ee0ab04f552" exitCode=0 Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.607857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerDied","Data":"d8b986f1dc2139eae0f150ec499692542c00c7a32fc9258d14073ee0ab04f552"} Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.612656 4872 generic.go:334] "Generic (PLEG): container finished" podID="e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f" containerID="714203209d705a9a92725bbec2f656d9cc9a833603fdc025736a41f428fc23ca" exitCode=0 Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.612720 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-669f-account-create-update-rk72d" event={"ID":"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f","Type":"ContainerDied","Data":"714203209d705a9a92725bbec2f656d9cc9a833603fdc025736a41f428fc23ca"} Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.614595 4872 generic.go:334] "Generic (PLEG): container finished" podID="5e43edca-f702-4b9f-b8be-f95bda7b7a1e" containerID="e047b8817d33a644cf3b0bcbeda1ba1ab9570751e85a39efff2bfd5b043db152" exitCode=0 Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.614648 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-njmck" event={"ID":"5e43edca-f702-4b9f-b8be-f95bda7b7a1e","Type":"ContainerDied","Data":"e047b8817d33a644cf3b0bcbeda1ba1ab9570751e85a39efff2bfd5b043db152"} Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.620772 4872 generic.go:334] "Generic (PLEG): container finished" podID="49628158-8aeb-4512-9585-91db75925666" containerID="bb76e23e546a526cf389bb883de5831f95449710f17678077ca760843b473a79" exitCode=0 Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.620849 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dzr2t" event={"ID":"49628158-8aeb-4512-9585-91db75925666","Type":"ContainerDied","Data":"bb76e23e546a526cf389bb883de5831f95449710f17678077ca760843b473a79"} Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.642603 4872 generic.go:334] "Generic (PLEG): container finished" podID="ff6dd5da-4798-4300-8752-0eafdd05cf40" containerID="30aaa98d06b743c6f429cc642a1dd4f7e0e22515b9004bc1818661d459891112" exitCode=0 Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.642656 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" event={"ID":"ff6dd5da-4798-4300-8752-0eafdd05cf40","Type":"ContainerDied","Data":"30aaa98d06b743c6f429cc642a1dd4f7e0e22515b9004bc1818661d459891112"} Feb 03 06:19:49 crc kubenswrapper[4872]: I0203 06:19:49.915238 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.026083 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-run-httpd\") pod \"d38f107b-557f-4b56-89b4-5ecf9f235133\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.026134 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk57p\" (UniqueName: \"kubernetes.io/projected/d38f107b-557f-4b56-89b4-5ecf9f235133-kube-api-access-vk57p\") pod \"d38f107b-557f-4b56-89b4-5ecf9f235133\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.026152 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-config-data\") pod \"d38f107b-557f-4b56-89b4-5ecf9f235133\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.026222 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-scripts\") pod \"d38f107b-557f-4b56-89b4-5ecf9f235133\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.026275 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-log-httpd\") pod \"d38f107b-557f-4b56-89b4-5ecf9f235133\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.026323 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-sg-core-conf-yaml\") pod \"d38f107b-557f-4b56-89b4-5ecf9f235133\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.026376 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-combined-ca-bundle\") pod \"d38f107b-557f-4b56-89b4-5ecf9f235133\" (UID: \"d38f107b-557f-4b56-89b4-5ecf9f235133\") " Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.029498 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d38f107b-557f-4b56-89b4-5ecf9f235133" (UID: "d38f107b-557f-4b56-89b4-5ecf9f235133"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.029605 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d38f107b-557f-4b56-89b4-5ecf9f235133" (UID: "d38f107b-557f-4b56-89b4-5ecf9f235133"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.033421 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-scripts" (OuterVolumeSpecName: "scripts") pod "d38f107b-557f-4b56-89b4-5ecf9f235133" (UID: "d38f107b-557f-4b56-89b4-5ecf9f235133"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.042085 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38f107b-557f-4b56-89b4-5ecf9f235133-kube-api-access-vk57p" (OuterVolumeSpecName: "kube-api-access-vk57p") pod "d38f107b-557f-4b56-89b4-5ecf9f235133" (UID: "d38f107b-557f-4b56-89b4-5ecf9f235133"). InnerVolumeSpecName "kube-api-access-vk57p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.096537 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d38f107b-557f-4b56-89b4-5ecf9f235133" (UID: "d38f107b-557f-4b56-89b4-5ecf9f235133"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.128376 4872 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.128406 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk57p\" (UniqueName: \"kubernetes.io/projected/d38f107b-557f-4b56-89b4-5ecf9f235133-kube-api-access-vk57p\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.128416 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.128426 4872 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d38f107b-557f-4b56-89b4-5ecf9f235133-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.128435 4872 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.171306 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.171386 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b5585877d-9z5lp" podUID="683c6fc3-d9ec-4e50-9c36-22a929897924" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.204910 4872 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-config-data" (OuterVolumeSpecName: "config-data") pod "d38f107b-557f-4b56-89b4-5ecf9f235133" (UID: "d38f107b-557f-4b56-89b4-5ecf9f235133"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.230708 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.232257 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.236014 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d38f107b-557f-4b56-89b4-5ecf9f235133" (UID: "d38f107b-557f-4b56-89b4-5ecf9f235133"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.334274 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d38f107b-557f-4b56-89b4-5ecf9f235133-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.526611 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.661120 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.668830 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d38f107b-557f-4b56-89b4-5ecf9f235133","Type":"ContainerDied","Data":"e94e3f35161c038082cd16179e1dbcda02f60da3ba5e8b9bbb30ff6a1d8048b5"} Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.668872 4872 scope.go:117] "RemoveContainer" containerID="6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.777173 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.778863 4872 scope.go:117] "RemoveContainer" containerID="9090aacdcc4f851fd277b0557a76addd5ba0c20342b56fd3815356336d263d9e" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.856145 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.886857 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:50 crc kubenswrapper[4872]: E0203 06:19:50.887257 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="ceilometer-notification-agent" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887276 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="ceilometer-notification-agent" Feb 03 06:19:50 crc kubenswrapper[4872]: E0203 06:19:50.887299 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="sg-core" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887305 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="sg-core" Feb 03 06:19:50 crc kubenswrapper[4872]: E0203 06:19:50.887325 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="proxy-httpd" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887331 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="proxy-httpd" Feb 03 06:19:50 crc kubenswrapper[4872]: E0203 06:19:50.887348 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="ceilometer-central-agent" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887354 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="ceilometer-central-agent" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887528 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="sg-core" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887542 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="proxy-httpd" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887552 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" containerName="ceilometer-central-agent" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.887561 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" 
containerName="ceilometer-notification-agent" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.890277 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.893520 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.893718 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.954329 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967165 4872 scope.go:117] "RemoveContainer" containerID="d8b986f1dc2139eae0f150ec499692542c00c7a32fc9258d14073ee0ab04f552" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967602 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967656 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967695 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-scripts\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967787 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-log-httpd\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967802 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-config-data\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967831 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-run-httpd\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:50 crc kubenswrapper[4872]: I0203 06:19:50.967847 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnscq\" (UniqueName: \"kubernetes.io/projected/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-kube-api-access-gnscq\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.071805 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-log-httpd\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.072139 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-config-data\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.072184 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-run-httpd\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.072204 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnscq\" (UniqueName: \"kubernetes.io/projected/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-kube-api-access-gnscq\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.072256 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.072282 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.072302 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-scripts\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.075718 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-run-httpd\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.076119 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-log-httpd\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.122290 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-scripts\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.122869 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.123198 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-config-data\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.126697 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnscq\" (UniqueName: \"kubernetes.io/projected/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-kube-api-access-gnscq\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.129810 4872 scope.go:117] "RemoveContainer" containerID="0a6156556717d23db672daf90649eef056db7d825d186e86e07aab6f3f011879" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.137273 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.206215 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:19:51 crc kubenswrapper[4872]: W0203 06:19:51.296220 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38f107b_557f_4b56_89b4_5ecf9f235133.slice/crio-6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48.scope WatchSource:0}: Error finding container 6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48: Status 404 returned error can't find the container with id 6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48 Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.380143 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.491930 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8ac6be-5bf4-4866-b8c7-073a00d94310-operator-scripts\") pod \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.492296 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqvmr\" (UniqueName: \"kubernetes.io/projected/5f8ac6be-5bf4-4866-b8c7-073a00d94310-kube-api-access-kqvmr\") pod \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\" (UID: \"5f8ac6be-5bf4-4866-b8c7-073a00d94310\") " Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.493947 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8ac6be-5bf4-4866-b8c7-073a00d94310-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f8ac6be-5bf4-4866-b8c7-073a00d94310" (UID: "5f8ac6be-5bf4-4866-b8c7-073a00d94310"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.499154 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8ac6be-5bf4-4866-b8c7-073a00d94310-kube-api-access-kqvmr" (OuterVolumeSpecName: "kube-api-access-kqvmr") pod "5f8ac6be-5bf4-4866-b8c7-073a00d94310" (UID: "5f8ac6be-5bf4-4866-b8c7-073a00d94310"). InnerVolumeSpecName "kube-api-access-kqvmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.595025 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f8ac6be-5bf4-4866-b8c7-073a00d94310-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.595064 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqvmr\" (UniqueName: \"kubernetes.io/projected/5f8ac6be-5bf4-4866-b8c7-073a00d94310-kube-api-access-kqvmr\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.636150 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:51 crc kubenswrapper[4872]: E0203 06:19:51.689051 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683c6fc3_d9ec_4e50_9c36_22a929897924.slice/crio-076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38f107b_557f_4b56_89b4_5ecf9f235133.slice/crio-9090aacdcc4f851fd277b0557a76addd5ba0c20342b56fd3815356336d263d9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683c6fc3_d9ec_4e50_9c36_22a929897924.slice/crio-conmon-076d6c7436ce3b0b6f07cb396c3ed943918683769dc54bd0f79ec7543907ed92.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38f107b_557f_4b56_89b4_5ecf9f235133.slice/crio-conmon-6503fc774b12e313803daed1def517f6b106113415382873afa7eff74d2a5f48.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683c6fc3_d9ec_4e50_9c36_22a929897924.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf475ab66_31e6_46da_ad2e_8e8279e33b68.slice/crio-def8661c05bb4e9553cd79780f48ef13e8421f47cd09acd384a5bfeb8d63e1a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683c6fc3_d9ec_4e50_9c36_22a929897924.slice/crio-d8b17f8b75c269b32179beb6de7d299b46f8b82b0a12ac6ccb64980158f57c08\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea66403d_a189_495c_9067_18571e929874.slice/crio-conmon-534ee65dd0168b081759c8c62509c1a653cf9038605da25240365de5b1fdcb4c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38f107b_557f_4b56_89b4_5ecf9f235133.slice/crio-e94e3f35161c038082cd16179e1dbcda02f60da3ba5e8b9bbb30ff6a1d8048b5\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38f107b_557f_4b56_89b4_5ecf9f235133.slice/crio-d8b986f1dc2139eae0f150ec499692542c00c7a32fc9258d14073ee0ab04f552.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38f107b_557f_4b56_89b4_5ecf9f235133.slice/crio-conmon-d8b986f1dc2139eae0f150ec499692542c00c7a32fc9258d14073ee0ab04f552.scope\": RecentStats: unable to find data in memory cache]" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.695762 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7zhm\" (UniqueName: \"kubernetes.io/projected/49628158-8aeb-4512-9585-91db75925666-kube-api-access-n7zhm\") pod \"49628158-8aeb-4512-9585-91db75925666\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.698251 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49628158-8aeb-4512-9585-91db75925666-operator-scripts\") pod \"49628158-8aeb-4512-9585-91db75925666\" (UID: \"49628158-8aeb-4512-9585-91db75925666\") " Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.699599 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49628158-8aeb-4512-9585-91db75925666-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49628158-8aeb-4512-9585-91db75925666" (UID: "49628158-8aeb-4512-9585-91db75925666"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.723738 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49628158-8aeb-4512-9585-91db75925666-kube-api-access-n7zhm" (OuterVolumeSpecName: "kube-api-access-n7zhm") pod "49628158-8aeb-4512-9585-91db75925666" (UID: "49628158-8aeb-4512-9585-91db75925666"). InnerVolumeSpecName "kube-api-access-n7zhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.732973 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.761105 4872 generic.go:334] "Generic (PLEG): container finished" podID="f475ab66-31e6-46da-ad2e-8e8279e33b68" containerID="def8661c05bb4e9553cd79780f48ef13e8421f47cd09acd384a5bfeb8d63e1a4" exitCode=137 Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.761244 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57dc94599b-bvf7j" event={"ID":"f475ab66-31e6-46da-ad2e-8e8279e33b68","Type":"ContainerDied","Data":"def8661c05bb4e9553cd79780f48ef13e8421f47cd09acd384a5bfeb8d63e1a4"} Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.761272 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57dc94599b-bvf7j" event={"ID":"f475ab66-31e6-46da-ad2e-8e8279e33b68","Type":"ContainerStarted","Data":"1a4331e90d5c454da1370878f2355f9e2ba2836da29536c2223b6bc914ee9654"} Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.777925 4872 generic.go:334] "Generic (PLEG): container finished" podID="ea66403d-a189-495c-9067-18571e929874" containerID="534ee65dd0168b081759c8c62509c1a653cf9038605da25240365de5b1fdcb4c" exitCode=137 Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.778024 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerDied","Data":"534ee65dd0168b081759c8c62509c1a653cf9038605da25240365de5b1fdcb4c"} Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.787413 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dzr2t" event={"ID":"49628158-8aeb-4512-9585-91db75925666","Type":"ContainerDied","Data":"f35c9699b735718d4397ef40543782a6d89362907881b7305632f5ff6998fbbf"} Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.787455 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f35c9699b735718d4397ef40543782a6d89362907881b7305632f5ff6998fbbf" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.787524 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dzr2t" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.810492 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7zhm\" (UniqueName: \"kubernetes.io/projected/49628158-8aeb-4512-9585-91db75925666-kube-api-access-n7zhm\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.810516 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49628158-8aeb-4512-9585-91db75925666-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.814865 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c9xvp" event={"ID":"5f8ac6be-5bf4-4866-b8c7-073a00d94310","Type":"ContainerDied","Data":"c668d789f1296f33d4c994a4d99a2d04a8750899e0f0277d5ca7655f19b057a3"} Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.814907 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c668d789f1296f33d4c994a4d99a2d04a8750899e0f0277d5ca7655f19b057a3" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.814930 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c9xvp" Feb 03 06:19:51 crc kubenswrapper[4872]: I0203 06:19:51.882946 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f7ccbfc56-8bmzq" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.174338 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38f107b-557f-4b56-89b4-5ecf9f235133" path="/var/lib/kubelet/pods/d38f107b-557f-4b56-89b4-5ecf9f235133/volumes" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.189936 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.208623 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.208754 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.230637 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.238289 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwkkh\" (UniqueName: \"kubernetes.io/projected/ac0da642-81fc-4cf5-9933-210cf0f17ba9-kube-api-access-vwkkh\") pod \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.238380 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0da642-81fc-4cf5-9933-210cf0f17ba9-operator-scripts\") pod \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\" (UID: \"ac0da642-81fc-4cf5-9933-210cf0f17ba9\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.241459 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0da642-81fc-4cf5-9933-210cf0f17ba9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac0da642-81fc-4cf5-9933-210cf0f17ba9" (UID: "ac0da642-81fc-4cf5-9933-210cf0f17ba9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.245990 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0da642-81fc-4cf5-9933-210cf0f17ba9-kube-api-access-vwkkh" (OuterVolumeSpecName: "kube-api-access-vwkkh") pod "ac0da642-81fc-4cf5-9933-210cf0f17ba9" (UID: "ac0da642-81fc-4cf5-9933-210cf0f17ba9"). InnerVolumeSpecName "kube-api-access-vwkkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.258830 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.339964 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff6dd5da-4798-4300-8752-0eafdd05cf40-operator-scripts\") pod \"ff6dd5da-4798-4300-8752-0eafdd05cf40\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.340125 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-operator-scripts\") pod \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.340269 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-operator-scripts\") pod \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.340440 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5fbv\" (UniqueName: \"kubernetes.io/projected/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-kube-api-access-n5fbv\") pod \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\" (UID: \"5e43edca-f702-4b9f-b8be-f95bda7b7a1e\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.340535 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xz82\" (UniqueName: 
\"kubernetes.io/projected/ff6dd5da-4798-4300-8752-0eafdd05cf40-kube-api-access-4xz82\") pod \"ff6dd5da-4798-4300-8752-0eafdd05cf40\" (UID: \"ff6dd5da-4798-4300-8752-0eafdd05cf40\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.340747 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spn9r\" (UniqueName: \"kubernetes.io/projected/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-kube-api-access-spn9r\") pod \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\" (UID: \"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f\") " Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.341141 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e43edca-f702-4b9f-b8be-f95bda7b7a1e" (UID: "5e43edca-f702-4b9f-b8be-f95bda7b7a1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.341255 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff6dd5da-4798-4300-8752-0eafdd05cf40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff6dd5da-4798-4300-8752-0eafdd05cf40" (UID: "ff6dd5da-4798-4300-8752-0eafdd05cf40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.341528 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f" (UID: "e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.341947 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwkkh\" (UniqueName: \"kubernetes.io/projected/ac0da642-81fc-4cf5-9933-210cf0f17ba9-kube-api-access-vwkkh\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.341986 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac0da642-81fc-4cf5-9933-210cf0f17ba9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.341999 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff6dd5da-4798-4300-8752-0eafdd05cf40-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.342010 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.342021 4872 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.346363 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-kube-api-access-n5fbv" (OuterVolumeSpecName: "kube-api-access-n5fbv") pod "5e43edca-f702-4b9f-b8be-f95bda7b7a1e" (UID: "5e43edca-f702-4b9f-b8be-f95bda7b7a1e"). InnerVolumeSpecName "kube-api-access-n5fbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.347216 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-kube-api-access-spn9r" (OuterVolumeSpecName: "kube-api-access-spn9r") pod "e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f" (UID: "e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f"). InnerVolumeSpecName "kube-api-access-spn9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.348301 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff6dd5da-4798-4300-8752-0eafdd05cf40-kube-api-access-4xz82" (OuterVolumeSpecName: "kube-api-access-4xz82") pod "ff6dd5da-4798-4300-8752-0eafdd05cf40" (UID: "ff6dd5da-4798-4300-8752-0eafdd05cf40"). InnerVolumeSpecName "kube-api-access-4xz82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.443284 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5fbv\" (UniqueName: \"kubernetes.io/projected/5e43edca-f702-4b9f-b8be-f95bda7b7a1e-kube-api-access-n5fbv\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.443314 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xz82\" (UniqueName: \"kubernetes.io/projected/ff6dd5da-4798-4300-8752-0eafdd05cf40-kube-api-access-4xz82\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.443323 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spn9r\" (UniqueName: \"kubernetes.io/projected/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f-kube-api-access-spn9r\") on node \"crc\" DevicePath \"\"" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.826170 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerStarted","Data":"6ee599bfdbef59f37f08e4cb0f6bd76a0a05dbf4b497d25d7093965e53517418"} Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.829725 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" event={"ID":"ff6dd5da-4798-4300-8752-0eafdd05cf40","Type":"ContainerDied","Data":"945ac3ffde33803a672a415575ad472bc9eb75fac62bd8816fa71738b784ee6c"} Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.829753 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="945ac3ffde33803a672a415575ad472bc9eb75fac62bd8816fa71738b784ee6c" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.829799 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-08b5-account-create-update-mfr7x" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.847278 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89b4-account-create-update-wbhm5" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.847278 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89b4-account-create-update-wbhm5" event={"ID":"ac0da642-81fc-4cf5-9933-210cf0f17ba9","Type":"ContainerDied","Data":"cdec3e1aaf264950ef9968c84c52460a5c670142cc7dae8ff6c3fd185f9f1540"} Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.847410 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdec3e1aaf264950ef9968c84c52460a5c670142cc7dae8ff6c3fd185f9f1540" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.849287 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-669f-account-create-update-rk72d" event={"ID":"e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f","Type":"ContainerDied","Data":"e80115dd71199c192edfa4af7cb2c4058ef56f82a7478e6591db50e57013b5b8"} Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.849323 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e80115dd71199c192edfa4af7cb2c4058ef56f82a7478e6591db50e57013b5b8" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.849379 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-669f-account-create-update-rk72d" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.875368 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerStarted","Data":"7bddaad10ce4887ebdf60346c63ab1b8d10079788fd64dfb77756bf3a7ddc99b"} Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.879251 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-njmck" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.879293 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-njmck" event={"ID":"5e43edca-f702-4b9f-b8be-f95bda7b7a1e","Type":"ContainerDied","Data":"8743cd55db071468d2353ff0bc947905113c3150fffafdb3da5add1ec4a49ae8"} Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.879336 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8743cd55db071468d2353ff0bc947905113c3150fffafdb3da5add1ec4a49ae8" Feb 03 06:19:52 crc kubenswrapper[4872]: I0203 06:19:52.907755 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 06:19:53 crc kubenswrapper[4872]: I0203 06:19:53.671390 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:53 crc kubenswrapper[4872]: I0203 06:19:53.672736 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f5458fb75-k8gpr" Feb 03 06:19:53 crc kubenswrapper[4872]: I0203 06:19:53.894432 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerStarted","Data":"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b"} Feb 03 06:19:53 crc kubenswrapper[4872]: I0203 06:19:53.894473 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerStarted","Data":"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30"} Feb 03 06:19:54 crc kubenswrapper[4872]: I0203 06:19:54.906220 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerStarted","Data":"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1"} Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.126251 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-z48z4"] Feb 03 06:19:57 crc kubenswrapper[4872]: E0203 06:19:57.128351 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.128437 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: E0203 06:19:57.128501 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e43edca-f702-4b9f-b8be-f95bda7b7a1e" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.128561 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e43edca-f702-4b9f-b8be-f95bda7b7a1e" containerName="mariadb-database-create" Feb 03 
06:19:57 crc kubenswrapper[4872]: E0203 06:19:57.128621 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8ac6be-5bf4-4866-b8c7-073a00d94310" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.128673 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8ac6be-5bf4-4866-b8c7-073a00d94310" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: E0203 06:19:57.128750 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0da642-81fc-4cf5-9933-210cf0f17ba9" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.128804 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0da642-81fc-4cf5-9933-210cf0f17ba9" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: E0203 06:19:57.128862 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6dd5da-4798-4300-8752-0eafdd05cf40" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.128916 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6dd5da-4798-4300-8752-0eafdd05cf40" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: E0203 06:19:57.128992 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49628158-8aeb-4512-9585-91db75925666" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.129047 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="49628158-8aeb-4512-9585-91db75925666" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.129274 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0da642-81fc-4cf5-9933-210cf0f17ba9" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.129337 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8ac6be-5bf4-4866-b8c7-073a00d94310" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.129400 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff6dd5da-4798-4300-8752-0eafdd05cf40" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.129465 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f" containerName="mariadb-account-create-update" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.129521 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="49628158-8aeb-4512-9585-91db75925666" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.129582 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e43edca-f702-4b9f-b8be-f95bda7b7a1e" containerName="mariadb-database-create" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.130363 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.136675 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.140351 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tz2lz" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.140358 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.150841 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-z48z4"] Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.259083 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.259134 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-config-data\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.259175 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f29g\" (UniqueName: \"kubernetes.io/projected/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-kube-api-access-7f29g\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.259244 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-scripts\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.360420 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.360471 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-config-data\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.360505 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f29g\" (UniqueName: \"kubernetes.io/projected/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-kube-api-access-7f29g\") pod \"nova-cell0-conductor-db-sync-z48z4\" 
(UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.360582 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-scripts\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.373434 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.373860 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-scripts\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.373907 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-config-data\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.388263 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f29g\" (UniqueName: \"kubernetes.io/projected/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-kube-api-access-7f29g\") pod \"nova-cell0-conductor-db-sync-z48z4\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:57 crc kubenswrapper[4872]: I0203 06:19:57.478171 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:19:58 crc kubenswrapper[4872]: I0203 06:19:58.142432 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84b5664f65-lkwpl" Feb 03 06:19:58 crc kubenswrapper[4872]: I0203 06:19:58.215209 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fb67d557b-lcx8h"] Feb 03 06:19:58 crc kubenswrapper[4872]: I0203 06:19:58.215497 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fb67d557b-lcx8h" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-api" containerID="cri-o://3c062693bb24ee56336c9c2463c9530f815effc468d340d54d6d625073d6ecda" gracePeriod=30 Feb 03 06:19:58 crc kubenswrapper[4872]: I0203 06:19:58.215821 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fb67d557b-lcx8h" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-httpd" containerID="cri-o://e416fa1d39f57e8f200bd6f2fc35376c79265cec30a52203cec574146c5e1330" gracePeriod=30 Feb 03 06:19:58 crc kubenswrapper[4872]: I0203 06:19:58.980840 4872 generic.go:334] "Generic (PLEG): container finished" podID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerID="e416fa1d39f57e8f200bd6f2fc35376c79265cec30a52203cec574146c5e1330" exitCode=0 Feb 03 06:19:58 crc kubenswrapper[4872]: I0203 06:19:58.981113 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fb67d557b-lcx8h" event={"ID":"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9","Type":"ContainerDied","Data":"e416fa1d39f57e8f200bd6f2fc35376c79265cec30a52203cec574146c5e1330"} Feb 03 06:20:00 crc kubenswrapper[4872]: I0203 06:20:00.721230 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:20:00 crc kubenswrapper[4872]: I0203 06:20:00.722284 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:20:00 crc kubenswrapper[4872]: I0203 06:20:00.887424 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:20:00 crc kubenswrapper[4872]: I0203 06:20:00.887461 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:20:01 crc kubenswrapper[4872]: I0203 06:20:01.002012 4872 generic.go:334] "Generic (PLEG): container finished" podID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerID="3c062693bb24ee56336c9c2463c9530f815effc468d340d54d6d625073d6ecda" exitCode=0 Feb 03 06:20:01 crc kubenswrapper[4872]: I0203 06:20:01.002196 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fb67d557b-lcx8h" event={"ID":"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9","Type":"ContainerDied","Data":"3c062693bb24ee56336c9c2463c9530f815effc468d340d54d6d625073d6ecda"} Feb 03 06:20:01 crc kubenswrapper[4872]: I0203 06:20:01.980837 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.651414 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.705171 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-config\") pod \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.705497 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-ovndb-tls-certs\") pod \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.705801 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-httpd-config\") pod \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.705828 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx6n5\" (UniqueName: \"kubernetes.io/projected/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-kube-api-access-xx6n5\") pod \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.705856 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-combined-ca-bundle\") pod \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\" (UID: \"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9\") " Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.715319 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-kube-api-access-xx6n5" (OuterVolumeSpecName: "kube-api-access-xx6n5") pod "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" (UID: "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9"). InnerVolumeSpecName "kube-api-access-xx6n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.719834 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" (UID: "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.761847 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-config" (OuterVolumeSpecName: "config") pod "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" (UID: "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.789601 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" (UID: "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.796860 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" (UID: "2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.810635 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx6n5\" (UniqueName: \"kubernetes.io/projected/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-kube-api-access-xx6n5\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.810667 4872 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.810677 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.810710 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.810720 4872 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:04 crc kubenswrapper[4872]: I0203 06:20:04.898453 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-z48z4"] Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.039483 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67","Type":"ContainerStarted","Data":"2d97254a4fcbfbea31d35a7c49a470363fa1393ad17ad56c352577162793e28a"} Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.042117 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fb67d557b-lcx8h" event={"ID":"2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9","Type":"ContainerDied","Data":"982f5608d4262266ff8e40ca7f2115b8048ce8a8f1c185ad1718c799aff0e756"} Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.042155 4872 scope.go:117] "RemoveContainer" containerID="e416fa1d39f57e8f200bd6f2fc35376c79265cec30a52203cec574146c5e1330" Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.042285 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fb67d557b-lcx8h" Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.087605 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerStarted","Data":"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9"} Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.087806 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-central-agent" containerID="cri-o://1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" gracePeriod=30 Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.088051 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.088310 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="proxy-httpd" containerID="cri-o://cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" gracePeriod=30 Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.088373 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="sg-core" containerID="cri-o://08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" gracePeriod=30 Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.088413 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-notification-agent" containerID="cri-o://d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" gracePeriod=30 Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.097871 4872 scope.go:117] "RemoveContainer" containerID="3c062693bb24ee56336c9c2463c9530f815effc468d340d54d6d625073d6ecda" Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.100889 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-z48z4" event={"ID":"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd","Type":"ContainerStarted","Data":"5b61fa1c66e4b68f6e3019aad1a7cad929f27145eabd325185428c0225853adb"} Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.117044 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.102319578 podStartE2EDuration="25.11702843s" podCreationTimestamp="2026-02-03 06:19:40 +0000 UTC" firstStartedPulling="2026-02-03 06:19:41.293846721 +0000 UTC m=+1151.876538135" lastFinishedPulling="2026-02-03 06:20:04.308555573 +0000 UTC m=+1174.891246987" observedRunningTime="2026-02-03 06:20:05.081816313 +0000 UTC m=+1175.664507717" watchObservedRunningTime="2026-02-03 06:20:05.11702843 +0000 UTC m=+1175.699719834" Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.117805 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.144726239 podStartE2EDuration="15.117800029s" podCreationTimestamp="2026-02-03 06:19:50 +0000 UTC" firstStartedPulling="2026-02-03 06:19:52.286566552 +0000 UTC m=+1162.869257966" lastFinishedPulling="2026-02-03 06:20:04.259640342 +0000 UTC m=+1174.842331756" observedRunningTime="2026-02-03 06:20:05.108347669 +0000 UTC 
m=+1175.691039083" watchObservedRunningTime="2026-02-03 06:20:05.117800029 +0000 UTC m=+1175.700491433" Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.151725 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fb67d557b-lcx8h"] Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.158214 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6fb67d557b-lcx8h"] Feb 03 06:20:05 crc kubenswrapper[4872]: I0203 06:20:05.953589 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.037360 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-config-data\") pod \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.037404 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnscq\" (UniqueName: \"kubernetes.io/projected/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-kube-api-access-gnscq\") pod \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.037466 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-sg-core-conf-yaml\") pod \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.037494 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-run-httpd\") pod \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.037581 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-scripts\") pod \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.037621 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-log-httpd\") pod \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.037670 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-combined-ca-bundle\") pod \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\" (UID: \"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda\") " Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.040016 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" (UID: "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.040371 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" (UID: "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.049169 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-kube-api-access-gnscq" (OuterVolumeSpecName: "kube-api-access-gnscq") pod "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" (UID: "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda"). InnerVolumeSpecName "kube-api-access-gnscq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.060671 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-scripts" (OuterVolumeSpecName: "scripts") pod "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" (UID: "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.083867 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" (UID: "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.142062 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnscq\" (UniqueName: \"kubernetes.io/projected/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-kube-api-access-gnscq\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.142286 4872 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.143489 4872 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.143560 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.143616 4872 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.153231 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" path="/var/lib/kubelet/pods/2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9/volumes" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.157780 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" (UID: "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.179258 4872 generic.go:334] "Generic (PLEG): container finished" podID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerID="cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" exitCode=0 Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.179288 4872 generic.go:334] "Generic (PLEG): container finished" podID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerID="08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" exitCode=2 Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.179297 4872 generic.go:334] "Generic (PLEG): container finished" podID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerID="d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" exitCode=0 Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.179304 4872 generic.go:334] "Generic (PLEG): container finished" podID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerID="1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" exitCode=0 Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.180052 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.209712 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-config-data" (OuterVolumeSpecName: "config-data") pod "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" (UID: "0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.245495 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.245528 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.258102 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerDied","Data":"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9"} Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.258349 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerDied","Data":"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1"} Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.258857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerDied","Data":"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b"} Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.258969 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerDied","Data":"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30"} Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.259057 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda","Type":"ContainerDied","Data":"6ee599bfdbef59f37f08e4cb0f6bd76a0a05dbf4b497d25d7093965e53517418"} Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.258500 4872 scope.go:117] "RemoveContainer" containerID="cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.288664 4872 scope.go:117] "RemoveContainer" containerID="08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.309501 4872 scope.go:117] "RemoveContainer" containerID="d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.357467 4872 scope.go:117] "RemoveContainer" containerID="1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.396808 4872 scope.go:117] "RemoveContainer" containerID="cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.397200 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": container with ID starting with cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9 not found: ID does not exist" containerID="cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.397237 4872 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9"} err="failed to get container status \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": rpc error: code = NotFound desc = could not find container \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": container with ID starting with cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.397263 4872 scope.go:117] "RemoveContainer" containerID="08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.397629 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": container with ID starting with 08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1 not found: ID does not exist" containerID="08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.397661 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1"} err="failed to get container status \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": rpc error: code = NotFound desc = could not find container \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": container with ID starting with 08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.397722 4872 scope.go:117] "RemoveContainer" containerID="d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.398730 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": container with ID starting with d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b not found: ID does not exist" containerID="d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.398771 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b"} err="failed to get container status \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": rpc error: code = NotFound desc = could not find container \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": container with ID starting with d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.398797 4872 scope.go:117] "RemoveContainer" containerID="1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.399169 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": container with ID starting with 1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30 not found: ID does not exist" 
containerID="1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.399188 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30"} err="failed to get container status \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": rpc error: code = NotFound desc = could not find container \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": container with ID starting with 1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.399202 4872 scope.go:117] "RemoveContainer" containerID="cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.399466 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9"} err="failed to get container status \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": rpc error: code = NotFound desc = could not find container \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": container with ID starting with cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.399480 4872 scope.go:117] "RemoveContainer" containerID="08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.399953 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1"} err="failed to get container status \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": rpc error: code = NotFound desc = could not find container \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": container with ID starting with 08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.399968 4872 scope.go:117] "RemoveContainer" containerID="d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.400796 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b"} err="failed to get container status \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": rpc error: code = NotFound desc = could not find container \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": container with ID starting with d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.400826 4872 scope.go:117] "RemoveContainer" containerID="1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401135 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30"} err="failed to get container status \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": rpc error: code = NotFound desc = could not find 
container \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": container with ID starting with 1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401155 4872 scope.go:117] "RemoveContainer" containerID="cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401447 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9"} err="failed to get container status \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": rpc error: code = NotFound desc = could not find container \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": container with ID starting with cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401469 4872 scope.go:117] "RemoveContainer" containerID="08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401624 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1"} err="failed to get container status \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": rpc error: code = NotFound desc = could not find container \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": container with ID starting with 08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401643 4872 scope.go:117] "RemoveContainer" containerID="d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401885 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b"} err="failed to get container status \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": rpc error: code = NotFound desc = could not find container \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": container with ID starting with d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.401903 4872 scope.go:117] "RemoveContainer" containerID="1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402049 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30"} err="failed to get container status \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": rpc error: code = NotFound desc = could not find container \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": container with ID starting with 1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402067 4872 scope.go:117] "RemoveContainer" containerID="cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402285 4872 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9"} err="failed to get container status \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": rpc error: code = NotFound desc = could not find container \"cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9\": container with ID starting with cd790cda7fa866dd7df14ab4e8a57d905e50d1c709df330a2824ef568bb4bfe9 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402302 4872 scope.go:117] "RemoveContainer" containerID="08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402539 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1"} err="failed to get container status \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": rpc error: code = NotFound desc = could not find container \"08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1\": container with ID starting with 08b220369d5d513254d24519e04d3f219752d2e9f8f3e742c260863e19bad5c1 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402557 4872 scope.go:117] "RemoveContainer" containerID="d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402776 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b"} err="failed to get container status \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": rpc error: code = NotFound desc = could not find container \"d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b\": container with ID starting with d558882b2a82e14b7d5f96bad754eda6d9f15a4f8ee0a0bc02341dc0f0602a9b not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.402803 4872 scope.go:117] "RemoveContainer" containerID="1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.403014 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30"} err="failed to get container status \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": rpc error: code = NotFound desc = could not find container \"1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30\": container with ID starting with 1c862ea404ece71ade20e79444c9e757ee46a7b9659c8f3b88532984cae94b30 not found: ID does not exist" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.521928 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.532274 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549275 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.549665 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-api" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549695 4872 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-api" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.549705 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-httpd" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549711 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-httpd" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.549719 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="sg-core" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549725 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="sg-core" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.549751 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-notification-agent" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549758 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-notification-agent" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.549768 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="proxy-httpd" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549775 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="proxy-httpd" Feb 03 06:20:06 crc kubenswrapper[4872]: E0203 06:20:06.549784 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-central-agent" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549790 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-central-agent" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549966 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-httpd" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549976 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="proxy-httpd" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.549986 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-central-agent" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.550002 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="ceilometer-notification-agent" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.550012 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cdef5a9-2c9b-4e8b-8527-15aa47d46bf9" containerName="neutron-api" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.550024 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" containerName="sg-core" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.552559 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.558531 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.560339 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.564875 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.654045 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-config-data\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.654076 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.654113 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-scripts\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.654302 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-run-httpd\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.655095 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.655247 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvm56\" (UniqueName: \"kubernetes.io/projected/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-kube-api-access-gvm56\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.655293 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-log-httpd\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.756871 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 
06:20:06.756940 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvm56\" (UniqueName: \"kubernetes.io/projected/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-kube-api-access-gvm56\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.756964 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-log-httpd\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.757001 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-config-data\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.757018 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.757052 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-scripts\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.757092 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-run-httpd\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.757833 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-run-httpd\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.758114 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-log-httpd\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.762193 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-scripts\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.765790 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.770676 4872 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.770740 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-config-data\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.784449 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvm56\" (UniqueName: \"kubernetes.io/projected/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-kube-api-access-gvm56\") pod \"ceilometer-0\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " pod="openstack/ceilometer-0" Feb 03 06:20:06 crc kubenswrapper[4872]: I0203 06:20:06.872099 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:07 crc kubenswrapper[4872]: I0203 06:20:07.442880 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:08 crc kubenswrapper[4872]: I0203 06:20:08.046632 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:08 crc kubenswrapper[4872]: I0203 06:20:08.137369 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda" path="/var/lib/kubelet/pods/0bfe90a5-28a2-4e8f-a9d5-5eabb944bfda/volumes" Feb 03 06:20:08 crc kubenswrapper[4872]: I0203 06:20:08.204167 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerStarted","Data":"55c6d0bfe7bf970202a1329e71aaf3c1505d65e7ade26e3c018ec7dc11fa4e0d"} Feb 03 06:20:09 crc kubenswrapper[4872]: I0203 06:20:09.214989 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerStarted","Data":"25bc4abcd3e8657bb965b10355597470032c487d08ee20c109fcd0e8bf34876f"} Feb 03 06:20:10 crc kubenswrapper[4872]: I0203 06:20:10.726118 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 06:20:10 crc kubenswrapper[4872]: I0203 06:20:10.889275 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57dc94599b-bvf7j" podUID="f475ab66-31e6-46da-ad2e-8e8279e33b68" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 03 06:20:16 crc kubenswrapper[4872]: I0203 06:20:16.451525 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:20:16 crc kubenswrapper[4872]: I0203 06:20:16.452451 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-log" containerID="cri-o://6ad87165d548c13c101ca2d9191279df4398ffd900336678a80732ca52b40462" gracePeriod=30 Feb 03 
06:20:16 crc kubenswrapper[4872]: I0203 06:20:16.452607 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-httpd" containerID="cri-o://6be78455df4cde88131ca45af76207cd1e7de9daa4a8ccf672e63fc2b023731a" gracePeriod=30 Feb 03 06:20:17 crc kubenswrapper[4872]: I0203 06:20:17.284381 4872 generic.go:334] "Generic (PLEG): container finished" podID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerID="6ad87165d548c13c101ca2d9191279df4398ffd900336678a80732ca52b40462" exitCode=143 Feb 03 06:20:17 crc kubenswrapper[4872]: I0203 06:20:17.284424 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c71f7574-20a8-448b-bb5e-af62f7311a08","Type":"ContainerDied","Data":"6ad87165d548c13c101ca2d9191279df4398ffd900336678a80732ca52b40462"} Feb 03 06:20:17 crc kubenswrapper[4872]: I0203 06:20:17.812921 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:20:17 crc kubenswrapper[4872]: I0203 06:20:17.813431 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-log" containerID="cri-o://af3d526bad9ca30006bcc071fd6488403ee84c207b299a9450b2f59cdf5e105c" gracePeriod=30 Feb 03 06:20:17 crc kubenswrapper[4872]: I0203 06:20:17.813547 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-httpd" containerID="cri-o://a1bf2e37456770e9e654d093234729ded06287d191d8b32b2ee6b68adf601d1d" gracePeriod=30 Feb 03 06:20:18 crc kubenswrapper[4872]: I0203 06:20:18.300256 4872 generic.go:334] "Generic (PLEG): container finished" podID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerID="af3d526bad9ca30006bcc071fd6488403ee84c207b299a9450b2f59cdf5e105c" exitCode=143 Feb 03 06:20:18 crc kubenswrapper[4872]: I0203 06:20:18.300309 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4858d6ef-a1ea-4b48-ad9c-0a56860e802b","Type":"ContainerDied","Data":"af3d526bad9ca30006bcc071fd6488403ee84c207b299a9450b2f59cdf5e105c"} Feb 03 06:20:19 crc kubenswrapper[4872]: I0203 06:20:19.309865 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerStarted","Data":"753de1fbcab9eec927071fad5087edb3e06a3a51d11f6af06feb3eabcba20b26"} Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.341549 4872 generic.go:334] "Generic (PLEG): container finished" podID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerID="6be78455df4cde88131ca45af76207cd1e7de9daa4a8ccf672e63fc2b023731a" exitCode=0 Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.341916 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c71f7574-20a8-448b-bb5e-af62f7311a08","Type":"ContainerDied","Data":"6be78455df4cde88131ca45af76207cd1e7de9daa4a8ccf672e63fc2b023731a"} Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.341943 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c71f7574-20a8-448b-bb5e-af62f7311a08","Type":"ContainerDied","Data":"0a1af5817a9684688536c45699f8b8607c6b14476a652fbdae59929456738a45"} Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.341954 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a1af5817a9684688536c45699f8b8607c6b14476a652fbdae59929456738a45" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.352586 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-z48z4" event={"ID":"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd","Type":"ContainerStarted","Data":"3fbd1bcf2ec29bf66ac488f8c5866d938b3728cc6a13069ffe31a1b16de30844"} Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.372618 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-z48z4" podStartSLOduration=9.007836552 podStartE2EDuration="23.372597485s" podCreationTimestamp="2026-02-03 06:19:57 +0000 UTC" firstStartedPulling="2026-02-03 06:20:04.899511436 +0000 UTC m=+1175.482202850" lastFinishedPulling="2026-02-03 06:20:19.264272369 +0000 UTC m=+1189.846963783" observedRunningTime="2026-02-03 06:20:20.371439386 +0000 UTC m=+1190.954130800" watchObservedRunningTime="2026-02-03 06:20:20.372597485 +0000 UTC m=+1190.955288899" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.416345 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.526331 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-scripts\") pod \"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.526612 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6mwx\" (UniqueName: \"kubernetes.io/projected/c71f7574-20a8-448b-bb5e-af62f7311a08-kube-api-access-q6mwx\") pod \"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.526754 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-logs\") pod \"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.526808 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-combined-ca-bundle\") pod \"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.526858 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-httpd-run\") pod \"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.526893 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-public-tls-certs\") pod 
\"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.526996 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-config-data\") pod \"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.527021 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"c71f7574-20a8-448b-bb5e-af62f7311a08\" (UID: \"c71f7574-20a8-448b-bb5e-af62f7311a08\") " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.531957 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-logs" (OuterVolumeSpecName: "logs") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.532298 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.537080 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-scripts" (OuterVolumeSpecName: "scripts") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.541952 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.553499 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71f7574-20a8-448b-bb5e-af62f7311a08-kube-api-access-q6mwx" (OuterVolumeSpecName: "kube-api-access-q6mwx") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "kube-api-access-q6mwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.573702 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.630792 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.630974 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.631309 4872 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c71f7574-20a8-448b-bb5e-af62f7311a08-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.631393 4872 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.631452 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.631516 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6mwx\" (UniqueName: \"kubernetes.io/projected/c71f7574-20a8-448b-bb5e-af62f7311a08-kube-api-access-q6mwx\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.655105 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-config-data" (OuterVolumeSpecName: "config-data") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.661396 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c71f7574-20a8-448b-bb5e-af62f7311a08" (UID: "c71f7574-20a8-448b-bb5e-af62f7311a08"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.668282 4872 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.733791 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.733824 4872 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:20 crc kubenswrapper[4872]: I0203 06:20:20.733834 4872 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c71f7574-20a8-448b-bb5e-af62f7311a08-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.362677 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerStarted","Data":"6aa2094699e7a97240fa2afbc6511fca4a04ba50a205d68fa91e0696ded391b3"} Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.366527 4872 generic.go:334] "Generic (PLEG): container finished" podID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerID="a1bf2e37456770e9e654d093234729ded06287d191d8b32b2ee6b68adf601d1d" exitCode=0 Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.366572 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4858d6ef-a1ea-4b48-ad9c-0a56860e802b","Type":"ContainerDied","Data":"a1bf2e37456770e9e654d093234729ded06287d191d8b32b2ee6b68adf601d1d"} Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.366864 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.399757 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.410231 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.431590 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:20:21 crc kubenswrapper[4872]: E0203 06:20:21.435973 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-httpd" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.435998 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-httpd" Feb 03 06:20:21 crc kubenswrapper[4872]: E0203 06:20:21.436013 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-log" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.436019 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-log" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.436192 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-httpd" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.436212 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" containerName="glance-log" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.437100 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.440898 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.441034 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.450420 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549601 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dj4k\" (UniqueName: \"kubernetes.io/projected/71811df4-e41d-4e6b-a94c-81e871e39632-kube-api-access-7dj4k\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549646 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71811df4-e41d-4e6b-a94c-81e871e39632-logs\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549669 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-config-data\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549706 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-scripts\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549729 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549754 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549795 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71811df4-e41d-4e6b-a94c-81e871e39632-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.549821 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651102 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-config-data\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651163 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-scripts\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651239 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651274 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651350 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71811df4-e41d-4e6b-a94c-81e871e39632-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651402 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651527 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dj4k\" (UniqueName: \"kubernetes.io/projected/71811df4-e41d-4e6b-a94c-81e871e39632-kube-api-access-7dj4k\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.651587 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71811df4-e41d-4e6b-a94c-81e871e39632-logs\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.652240 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/71811df4-e41d-4e6b-a94c-81e871e39632-logs\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.655780 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.656997 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71811df4-e41d-4e6b-a94c-81e871e39632-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.666028 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-config-data\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.688441 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.697270 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.697543 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71811df4-e41d-4e6b-a94c-81e871e39632-scripts\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.714620 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dj4k\" (UniqueName: \"kubernetes.io/projected/71811df4-e41d-4e6b-a94c-81e871e39632-kube-api-access-7dj4k\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.753135 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"71811df4-e41d-4e6b-a94c-81e871e39632\") " pod="openstack/glance-default-external-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.805924 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990568 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-internal-tls-certs\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990620 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-config-data\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990644 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24jm9\" (UniqueName: \"kubernetes.io/projected/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-kube-api-access-24jm9\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990708 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-httpd-run\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990809 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-combined-ca-bundle\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990838 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-logs\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990973 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.990995 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-scripts\") pod \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\" (UID: \"4858d6ef-a1ea-4b48-ad9c-0a56860e802b\") " Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.995916 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:21 crc kubenswrapper[4872]: I0203 06:20:21.996033 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-logs" (OuterVolumeSpecName: "logs") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.004771 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-scripts" (OuterVolumeSpecName: "scripts") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.006367 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-kube-api-access-24jm9" (OuterVolumeSpecName: "kube-api-access-24jm9") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "kube-api-access-24jm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.044945 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.052150 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.070187 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.094450 4872 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.094481 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.094490 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24jm9\" (UniqueName: \"kubernetes.io/projected/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-kube-api-access-24jm9\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.094499 4872 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.094507 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.094515 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.134625 4872 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.140915 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.141081 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-config-data" (OuterVolumeSpecName: "config-data") pod "4858d6ef-a1ea-4b48-ad9c-0a56860e802b" (UID: "4858d6ef-a1ea-4b48-ad9c-0a56860e802b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.148832 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c71f7574-20a8-448b-bb5e-af62f7311a08" path="/var/lib/kubelet/pods/c71f7574-20a8-448b-bb5e-af62f7311a08/volumes" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.196958 4872 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.196986 4872 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.196999 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4858d6ef-a1ea-4b48-ad9c-0a56860e802b-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.388422 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4858d6ef-a1ea-4b48-ad9c-0a56860e802b","Type":"ContainerDied","Data":"6077b9f6ee10b2cbb56411c44ac1dd7be88243893004fbe6b56d6e7d358d5049"} Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.388676 4872 scope.go:117] "RemoveContainer" containerID="a1bf2e37456770e9e654d093234729ded06287d191d8b32b2ee6b68adf601d1d" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.388481 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.423443 4872 scope.go:117] "RemoveContainer" containerID="af3d526bad9ca30006bcc071fd6488403ee84c207b299a9450b2f59cdf5e105c" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.429968 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.452893 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.471908 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:20:22 crc kubenswrapper[4872]: E0203 06:20:22.472288 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-log" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.472298 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-log" Feb 03 06:20:22 crc kubenswrapper[4872]: E0203 06:20:22.472312 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-httpd" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.472317 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-httpd" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.472475 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-httpd" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.472488 4872 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" containerName="glance-log" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.473798 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.480289 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.480914 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.489532 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609552 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beba659d-d168-47b7-a0ee-f467101ed286-logs\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609619 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609637 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beba659d-d168-47b7-a0ee-f467101ed286-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609828 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609850 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-scripts\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609874 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609897 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfv8g\" (UniqueName: \"kubernetes.io/projected/beba659d-d168-47b7-a0ee-f467101ed286-kube-api-access-jfv8g\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") 
" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.609974 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-config-data\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.706170 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713484 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-config-data\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713782 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beba659d-d168-47b7-a0ee-f467101ed286-logs\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713811 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713828 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beba659d-d168-47b7-a0ee-f467101ed286-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713875 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713891 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-scripts\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713913 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.713932 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfv8g\" (UniqueName: \"kubernetes.io/projected/beba659d-d168-47b7-a0ee-f467101ed286-kube-api-access-jfv8g\") 
pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.714812 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.714931 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beba659d-d168-47b7-a0ee-f467101ed286-logs\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.715160 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/beba659d-d168-47b7-a0ee-f467101ed286-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.725152 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-scripts\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.725380 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.734593 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.735406 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beba659d-d168-47b7-a0ee-f467101ed286-config-data\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.743519 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfv8g\" (UniqueName: \"kubernetes.io/projected/beba659d-d168-47b7-a0ee-f467101ed286-kube-api-access-jfv8g\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.754046 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"beba659d-d168-47b7-a0ee-f467101ed286\") " pod="openstack/glance-default-internal-api-0" Feb 03 
06:20:22 crc kubenswrapper[4872]: I0203 06:20:22.802262 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.403068 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.416117 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71811df4-e41d-4e6b-a94c-81e871e39632","Type":"ContainerStarted","Data":"5fe49e79d792b9ffa117fafa6f01315bc284401797db2a5df0fb100fd760faa5"} Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.420203 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerStarted","Data":"674043476d243ae5a686485afd5c876110657d2e975fae0bc284b321e28ecf9b"} Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.420352 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="ceilometer-central-agent" containerID="cri-o://25bc4abcd3e8657bb965b10355597470032c487d08ee20c109fcd0e8bf34876f" gracePeriod=30 Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.420611 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.420871 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="proxy-httpd" containerID="cri-o://674043476d243ae5a686485afd5c876110657d2e975fae0bc284b321e28ecf9b" gracePeriod=30 Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.420883 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="ceilometer-notification-agent" containerID="cri-o://753de1fbcab9eec927071fad5087edb3e06a3a51d11f6af06feb3eabcba20b26" gracePeriod=30 Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.420925 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="sg-core" containerID="cri-o://6aa2094699e7a97240fa2afbc6511fca4a04ba50a205d68fa91e0696ded391b3" gracePeriod=30 Feb 03 06:20:23 crc kubenswrapper[4872]: I0203 06:20:23.465816 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.914042066 podStartE2EDuration="17.465800699s" podCreationTimestamp="2026-02-03 06:20:06 +0000 UTC" firstStartedPulling="2026-02-03 06:20:07.500738007 +0000 UTC m=+1178.083429421" lastFinishedPulling="2026-02-03 06:20:23.05249664 +0000 UTC m=+1193.635188054" observedRunningTime="2026-02-03 06:20:23.459186538 +0000 UTC m=+1194.041877952" watchObservedRunningTime="2026-02-03 06:20:23.465800699 +0000 UTC m=+1194.048492113" Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.141219 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4858d6ef-a1ea-4b48-ad9c-0a56860e802b" path="/var/lib/kubelet/pods/4858d6ef-a1ea-4b48-ad9c-0a56860e802b/volumes" Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.431262 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"beba659d-d168-47b7-a0ee-f467101ed286","Type":"ContainerStarted","Data":"55670c01c0ed62e26a08bbb17e9b2505e72397c80206cda810333bd4ec507741"} Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.431302 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"beba659d-d168-47b7-a0ee-f467101ed286","Type":"ContainerStarted","Data":"4b9cce8679a647c7faadece62552742c4958030db344e5a4e5d7de71a6db2fa2"} Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.444219 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71811df4-e41d-4e6b-a94c-81e871e39632","Type":"ContainerStarted","Data":"e363150e46e8c6cf2f1040c421503deac37a4c09907ec4cde497e8091a479ec8"} Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.447198 4872 generic.go:334] "Generic (PLEG): container finished" podID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerID="6aa2094699e7a97240fa2afbc6511fca4a04ba50a205d68fa91e0696ded391b3" exitCode=2 Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.447219 4872 generic.go:334] "Generic (PLEG): container finished" podID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerID="753de1fbcab9eec927071fad5087edb3e06a3a51d11f6af06feb3eabcba20b26" exitCode=0 Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.447233 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerDied","Data":"6aa2094699e7a97240fa2afbc6511fca4a04ba50a205d68fa91e0696ded391b3"} Feb 03 06:20:24 crc kubenswrapper[4872]: I0203 06:20:24.447250 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerDied","Data":"753de1fbcab9eec927071fad5087edb3e06a3a51d11f6af06feb3eabcba20b26"} Feb 03 06:20:25 crc kubenswrapper[4872]: I0203 06:20:25.482061 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71811df4-e41d-4e6b-a94c-81e871e39632","Type":"ContainerStarted","Data":"5287a7862726d5aa1f512a28a2f247010c90b6c7945d73f02d1e7a9166c8adef"} Feb 03 06:20:25 crc kubenswrapper[4872]: I0203 06:20:25.490235 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"beba659d-d168-47b7-a0ee-f467101ed286","Type":"ContainerStarted","Data":"71ec41377bfb57e0505aa84c1c2869b2256b248914e60d0164f00b51a7a797a9"} Feb 03 06:20:25 crc kubenswrapper[4872]: I0203 06:20:25.508319 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.508291882 podStartE2EDuration="4.508291882s" podCreationTimestamp="2026-02-03 06:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:20:25.504260133 +0000 UTC m=+1196.086951577" watchObservedRunningTime="2026-02-03 06:20:25.508291882 +0000 UTC m=+1196.090983326" Feb 03 06:20:25 crc kubenswrapper[4872]: I0203 06:20:25.538714 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.538662501 podStartE2EDuration="3.538662501s" podCreationTimestamp="2026-02-03 06:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:20:25.53249694 +0000 
UTC m=+1196.115188374" watchObservedRunningTime="2026-02-03 06:20:25.538662501 +0000 UTC m=+1196.121353925" Feb 03 06:20:25 crc kubenswrapper[4872]: I0203 06:20:25.727887 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:20:25 crc kubenswrapper[4872]: I0203 06:20:25.893898 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57dc94599b-bvf7j" podUID="f475ab66-31e6-46da-ad2e-8e8279e33b68" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.052981 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.053572 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.120117 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.136551 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.461342 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.549880 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.552155 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.552201 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.802855 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.802895 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.832663 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:32 crc kubenswrapper[4872]: I0203 06:20:32.880199 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:33 crc kubenswrapper[4872]: I0203 06:20:33.565775 4872 generic.go:334] "Generic (PLEG): container finished" podID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerID="25bc4abcd3e8657bb965b10355597470032c487d08ee20c109fcd0e8bf34876f" exitCode=0 Feb 03 06:20:33 crc kubenswrapper[4872]: I0203 06:20:33.565887 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerDied","Data":"25bc4abcd3e8657bb965b10355597470032c487d08ee20c109fcd0e8bf34876f"} Feb 03 06:20:33 crc kubenswrapper[4872]: I0203 06:20:33.566926 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:33 crc kubenswrapper[4872]: I0203 06:20:33.567178 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:34 crc kubenswrapper[4872]: I0203 06:20:34.222861 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:20:34 crc kubenswrapper[4872]: I0203 06:20:34.313139 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57dc94599b-bvf7j" Feb 03 06:20:34 crc kubenswrapper[4872]: I0203 06:20:34.392835 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b48d58c48-rvvcb"] Feb 03 06:20:34 crc kubenswrapper[4872]: I0203 06:20:34.415259 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 06:20:34 crc kubenswrapper[4872]: I0203 06:20:34.430565 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 06:20:34 crc kubenswrapper[4872]: I0203 06:20:34.574094 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon-log" containerID="cri-o://555391f6e6812ef4d7e698df432a3cb13becfd2fc7c2de23c36f6678fe7447f6" gracePeriod=30 Feb 03 06:20:34 crc kubenswrapper[4872]: I0203 06:20:34.574659 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" containerID="cri-o://7bddaad10ce4887ebdf60346c63ab1b8d10079788fd64dfb77756bf3a7ddc99b" gracePeriod=30 Feb 03 06:20:35 crc kubenswrapper[4872]: I0203 06:20:35.580961 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:20:35 crc kubenswrapper[4872]: I0203 06:20:35.581001 4872 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 06:20:35 crc kubenswrapper[4872]: I0203 06:20:35.676341 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:35 crc kubenswrapper[4872]: I0203 06:20:35.681251 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 06:20:36 crc kubenswrapper[4872]: I0203 06:20:36.891279 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 03 06:20:38 crc kubenswrapper[4872]: I0203 06:20:38.608791 4872 generic.go:334] "Generic (PLEG): container finished" podID="ea66403d-a189-495c-9067-18571e929874" containerID="7bddaad10ce4887ebdf60346c63ab1b8d10079788fd64dfb77756bf3a7ddc99b" exitCode=0 Feb 03 06:20:38 crc kubenswrapper[4872]: I0203 06:20:38.608879 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" 
event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerDied","Data":"7bddaad10ce4887ebdf60346c63ab1b8d10079788fd64dfb77756bf3a7ddc99b"} Feb 03 06:20:38 crc kubenswrapper[4872]: I0203 06:20:38.608940 4872 scope.go:117] "RemoveContainer" containerID="534ee65dd0168b081759c8c62509c1a653cf9038605da25240365de5b1fdcb4c" Feb 03 06:20:40 crc kubenswrapper[4872]: I0203 06:20:40.721840 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 06:20:50 crc kubenswrapper[4872]: I0203 06:20:50.721843 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 06:20:52 crc kubenswrapper[4872]: I0203 06:20:52.763562 4872 generic.go:334] "Generic (PLEG): container finished" podID="5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" containerID="3fbd1bcf2ec29bf66ac488f8c5866d938b3728cc6a13069ffe31a1b16de30844" exitCode=0 Feb 03 06:20:52 crc kubenswrapper[4872]: I0203 06:20:52.763965 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-z48z4" event={"ID":"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd","Type":"ContainerDied","Data":"3fbd1bcf2ec29bf66ac488f8c5866d938b3728cc6a13069ffe31a1b16de30844"} Feb 03 06:20:53 crc kubenswrapper[4872]: I0203 06:20:53.773019 4872 generic.go:334] "Generic (PLEG): container finished" podID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerID="674043476d243ae5a686485afd5c876110657d2e975fae0bc284b321e28ecf9b" exitCode=137 Feb 03 06:20:53 crc kubenswrapper[4872]: I0203 06:20:53.773090 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerDied","Data":"674043476d243ae5a686485afd5c876110657d2e975fae0bc284b321e28ecf9b"} Feb 03 06:20:53 crc kubenswrapper[4872]: I0203 06:20:53.773346 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f5c071e-77fc-41f2-ac51-a7bee2f13eab","Type":"ContainerDied","Data":"55c6d0bfe7bf970202a1329e71aaf3c1505d65e7ade26e3c018ec7dc11fa4e0d"} Feb 03 06:20:53 crc kubenswrapper[4872]: I0203 06:20:53.773359 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c6d0bfe7bf970202a1329e71aaf3c1505d65e7ade26e3c018ec7dc11fa4e0d" Feb 03 06:20:53 crc kubenswrapper[4872]: I0203 06:20:53.853075 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.053389 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-scripts\") pod \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.053790 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-config-data\") pod \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.053866 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-combined-ca-bundle\") pod \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.053934 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-log-httpd\") pod \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.054011 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-run-httpd\") pod \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.054031 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvm56\" (UniqueName: \"kubernetes.io/projected/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-kube-api-access-gvm56\") pod \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.054134 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-sg-core-conf-yaml\") pod \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\" (UID: \"0f5c071e-77fc-41f2-ac51-a7bee2f13eab\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.054772 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f5c071e-77fc-41f2-ac51-a7bee2f13eab" (UID: "0f5c071e-77fc-41f2-ac51-a7bee2f13eab"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.054835 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f5c071e-77fc-41f2-ac51-a7bee2f13eab" (UID: "0f5c071e-77fc-41f2-ac51-a7bee2f13eab"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.055232 4872 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.055253 4872 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.060454 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-scripts" (OuterVolumeSpecName: "scripts") pod "0f5c071e-77fc-41f2-ac51-a7bee2f13eab" (UID: "0f5c071e-77fc-41f2-ac51-a7bee2f13eab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.060915 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-kube-api-access-gvm56" (OuterVolumeSpecName: "kube-api-access-gvm56") pod "0f5c071e-77fc-41f2-ac51-a7bee2f13eab" (UID: "0f5c071e-77fc-41f2-ac51-a7bee2f13eab"). InnerVolumeSpecName "kube-api-access-gvm56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.092142 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f5c071e-77fc-41f2-ac51-a7bee2f13eab" (UID: "0f5c071e-77fc-41f2-ac51-a7bee2f13eab"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.108312 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.137844 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f5c071e-77fc-41f2-ac51-a7bee2f13eab" (UID: "0f5c071e-77fc-41f2-ac51-a7bee2f13eab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156265 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-scripts\") pod \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156304 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-config-data\") pod \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156359 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-combined-ca-bundle\") pod \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156392 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f29g\" (UniqueName: \"kubernetes.io/projected/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-kube-api-access-7f29g\") pod \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\" (UID: \"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd\") " Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156905 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156923 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvm56\" (UniqueName: \"kubernetes.io/projected/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-kube-api-access-gvm56\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156933 4872 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.156943 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.159967 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-scripts" (OuterVolumeSpecName: "scripts") pod "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" (UID: "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.164045 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-kube-api-access-7f29g" (OuterVolumeSpecName: "kube-api-access-7f29g") pod "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" (UID: "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd"). InnerVolumeSpecName "kube-api-access-7f29g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.165868 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-config-data" (OuterVolumeSpecName: "config-data") pod "0f5c071e-77fc-41f2-ac51-a7bee2f13eab" (UID: "0f5c071e-77fc-41f2-ac51-a7bee2f13eab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.185517 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-config-data" (OuterVolumeSpecName: "config-data") pod "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" (UID: "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.188762 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" (UID: "5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.258847 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f29g\" (UniqueName: \"kubernetes.io/projected/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-kube-api-access-7f29g\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.258911 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.258925 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.258940 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5c071e-77fc-41f2-ac51-a7bee2f13eab-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.258951 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.786099 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-z48z4" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.786095 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-z48z4" event={"ID":"5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd","Type":"ContainerDied","Data":"5b61fa1c66e4b68f6e3019aad1a7cad929f27145eabd325185428c0225853adb"} Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.786534 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b61fa1c66e4b68f6e3019aad1a7cad929f27145eabd325185428c0225853adb" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.786112 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.840910 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.847349 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.887539 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:54 crc kubenswrapper[4872]: E0203 06:20:54.888148 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" containerName="nova-cell0-conductor-db-sync" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888176 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" containerName="nova-cell0-conductor-db-sync" Feb 03 06:20:54 crc kubenswrapper[4872]: E0203 06:20:54.888204 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="proxy-httpd" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888217 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="proxy-httpd" Feb 03 06:20:54 crc kubenswrapper[4872]: E0203 06:20:54.888245 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="ceilometer-central-agent" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888260 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="ceilometer-central-agent" Feb 03 06:20:54 crc kubenswrapper[4872]: E0203 06:20:54.888286 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="sg-core" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888298 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="sg-core" Feb 03 06:20:54 crc kubenswrapper[4872]: E0203 06:20:54.888343 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="ceilometer-notification-agent" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888356 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="ceilometer-notification-agent" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888632 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="ceilometer-central-agent" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888662 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="sg-core" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888727 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" containerName="proxy-httpd" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888750 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" containerName="nova-cell0-conductor-db-sync" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.888776 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" 
containerName="ceilometer-notification-agent" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.891533 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.894432 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.894995 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.903424 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.923200 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.924541 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.933531 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.933782 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tz2lz" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.957573 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.972650 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898b1712-c38c-4438-9fd6-bc94e59b459e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.972723 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mst2r\" (UniqueName: \"kubernetes.io/projected/74254242-70b2-4a96-832c-b14d9469fb55-kube-api-access-mst2r\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.972743 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf2nh\" (UniqueName: \"kubernetes.io/projected/898b1712-c38c-4438-9fd6-bc94e59b459e-kube-api-access-wf2nh\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.972783 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.972847 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-run-httpd\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.972905 4872 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-log-httpd\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.972984 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.973032 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898b1712-c38c-4438-9fd6-bc94e59b459e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.973067 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-scripts\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:54 crc kubenswrapper[4872]: I0203 06:20:54.973087 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-config-data\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.073888 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-config-data\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.073931 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898b1712-c38c-4438-9fd6-bc94e59b459e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.073964 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mst2r\" (UniqueName: \"kubernetes.io/projected/74254242-70b2-4a96-832c-b14d9469fb55-kube-api-access-mst2r\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.073985 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf2nh\" (UniqueName: \"kubernetes.io/projected/898b1712-c38c-4438-9fd6-bc94e59b459e-kube-api-access-wf2nh\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.074006 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.074037 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-run-httpd\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.074069 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-log-httpd\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.074175 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.074197 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898b1712-c38c-4438-9fd6-bc94e59b459e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.075104 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-scripts\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.074930 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-run-httpd\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.074747 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-log-httpd\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.079184 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.083143 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-scripts\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.091189 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898b1712-c38c-4438-9fd6-bc94e59b459e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.092176 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898b1712-c38c-4438-9fd6-bc94e59b459e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.092526 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.092520 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-config-data\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.098724 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf2nh\" (UniqueName: \"kubernetes.io/projected/898b1712-c38c-4438-9fd6-bc94e59b459e-kube-api-access-wf2nh\") pod \"nova-cell0-conductor-0\" (UID: \"898b1712-c38c-4438-9fd6-bc94e59b459e\") " pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.100556 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mst2r\" (UniqueName: \"kubernetes.io/projected/74254242-70b2-4a96-832c-b14d9469fb55-kube-api-access-mst2r\") pod \"ceilometer-0\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.210487 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.251306 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.682780 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:20:55 crc kubenswrapper[4872]: W0203 06:20:55.755898 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod898b1712_c38c_4438_9fd6_bc94e59b459e.slice/crio-dc8a4b74446bcf1d5709734aab2c98f9c04a255fd77348bdbcf4353c4627532b WatchSource:0}: Error finding container dc8a4b74446bcf1d5709734aab2c98f9c04a255fd77348bdbcf4353c4627532b: Status 404 returned error can't find the container with id dc8a4b74446bcf1d5709734aab2c98f9c04a255fd77348bdbcf4353c4627532b Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.759657 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.795902 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerStarted","Data":"51e5e04aa03303c02fccdd96d59f269b2ef378b9fafbd48c950d45615c857d2d"} Feb 03 06:20:55 crc kubenswrapper[4872]: I0203 06:20:55.800328 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"898b1712-c38c-4438-9fd6-bc94e59b459e","Type":"ContainerStarted","Data":"dc8a4b74446bcf1d5709734aab2c98f9c04a255fd77348bdbcf4353c4627532b"} Feb 03 06:20:56 crc kubenswrapper[4872]: I0203 06:20:56.135527 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5c071e-77fc-41f2-ac51-a7bee2f13eab" path="/var/lib/kubelet/pods/0f5c071e-77fc-41f2-ac51-a7bee2f13eab/volumes" Feb 03 06:20:56 crc kubenswrapper[4872]: I0203 06:20:56.810075 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerStarted","Data":"415114c15d51347effca4b0ccc8c18462409daa4200985fcc95cb0ce5e7cf3da"} Feb 03 06:20:56 crc kubenswrapper[4872]: I0203 06:20:56.812232 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"898b1712-c38c-4438-9fd6-bc94e59b459e","Type":"ContainerStarted","Data":"a3d2212bf1903bc65f86166fa8f7850bd993df1adf1bef8d69384711f9e1b814"} Feb 03 06:20:56 crc kubenswrapper[4872]: I0203 06:20:56.812940 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 03 06:20:56 crc kubenswrapper[4872]: I0203 06:20:56.859837 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.859814343 podStartE2EDuration="2.859814343s" podCreationTimestamp="2026-02-03 06:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:20:56.832441376 +0000 UTC m=+1227.415132790" watchObservedRunningTime="2026-02-03 06:20:56.859814343 +0000 UTC m=+1227.442505767" Feb 03 06:20:58 crc kubenswrapper[4872]: I0203 06:20:58.841607 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerStarted","Data":"a029b58c464fa2bbb44a6a4b0bd076717230c4008f3fff5fb17d49c7a5cd85ef"} Feb 03 06:20:59 crc kubenswrapper[4872]: I0203 06:20:59.856486 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerStarted","Data":"fc3b41323d4de808a2039ee877898cbea1c31ecca78c822b67d5f4ddb00638c5"} Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.283156 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.721327 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b48d58c48-rvvcb" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.721460 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.790124 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4k9hs"] Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.791199 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.795459 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.795474 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.808272 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4k9hs"] Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.937707 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-scripts\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.937761 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.937783 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-config-data\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.937943 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zspd\" (UniqueName: \"kubernetes.io/projected/b823238a-397a-4aba-9788-c9bbe361f24e-kube-api-access-6zspd\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.947416 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] 
Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.949165 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.952037 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 06:21:00 crc kubenswrapper[4872]: I0203 06:21:00.983633 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.040925 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zspd\" (UniqueName: \"kubernetes.io/projected/b823238a-397a-4aba-9788-c9bbe361f24e-kube-api-access-6zspd\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.041205 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-config-data\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.041248 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-scripts\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.041283 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-logs\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.041302 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.041321 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-config-data\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.041371 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7rc\" (UniqueName: \"kubernetes.io/projected/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-kube-api-access-6q7rc\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.041417 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: 
I0203 06:21:01.057543 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.064311 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-config-data\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.081045 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.082915 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.085456 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.091142 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-scripts\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.097936 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.124511 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zspd\" (UniqueName: \"kubernetes.io/projected/b823238a-397a-4aba-9788-c9bbe361f24e-kube-api-access-6zspd\") pod \"nova-cell0-cell-mapping-4k9hs\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.140071 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.141184 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.145453 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q7rc\" (UniqueName: \"kubernetes.io/projected/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-kube-api-access-6q7rc\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.145509 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-config-data\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.145537 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.145562 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.145590 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgt6h\" (UniqueName: \"kubernetes.io/projected/68f2e287-351b-43a2-85d7-9b94f66239f3-kube-api-access-vgt6h\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.145856 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.154781 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-config-data\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.154912 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-logs\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.154952 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f2e287-351b-43a2-85d7-9b94f66239f3-logs\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.161348 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-logs\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.162108 
4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.185019 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-config-data\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.213582 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267132 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f2e287-351b-43a2-85d7-9b94f66239f3-logs\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267182 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267254 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgw9l\" (UniqueName: \"kubernetes.io/projected/fa342b9a-b292-4822-939d-201a76172c94-kube-api-access-wgw9l\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267289 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-config-data\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267314 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267349 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgt6h\" (UniqueName: \"kubernetes.io/projected/68f2e287-351b-43a2-85d7-9b94f66239f3-kube-api-access-vgt6h\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267436 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.267836 4872 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f2e287-351b-43a2-85d7-9b94f66239f3-logs\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.276347 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.277181 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.277229 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.277745 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.282570 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.297307 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.305461 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-config-data\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.315498 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgt6h\" (UniqueName: \"kubernetes.io/projected/68f2e287-351b-43a2-85d7-9b94f66239f3-kube-api-access-vgt6h\") pod \"nova-metadata-0\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.363542 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.370520 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.370586 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.376135 4872 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.378021 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgw9l\" (UniqueName: \"kubernetes.io/projected/fa342b9a-b292-4822-939d-201a76172c94-kube-api-access-wgw9l\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.378324 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-config-data\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.378348 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9js5p\" (UniqueName: \"kubernetes.io/projected/25e70206-4a21-4b3f-ab33-0d4f0edca409-kube-api-access-9js5p\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.378378 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.385389 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-c8lj5"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.387318 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.395232 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.403819 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgw9l\" (UniqueName: \"kubernetes.io/projected/fa342b9a-b292-4822-939d-201a76172c94-kube-api-access-wgw9l\") pod \"nova-cell1-novncproxy-0\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.404349 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q7rc\" (UniqueName: \"kubernetes.io/projected/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-kube-api-access-6q7rc\") pod \"nova-api-0\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.415838 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.425202 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-c8lj5"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480302 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480399 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480428 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-svc\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480460 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480484 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-config\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480548 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-config-data\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480568 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9js5p\" (UniqueName: \"kubernetes.io/projected/25e70206-4a21-4b3f-ab33-0d4f0edca409-kube-api-access-9js5p\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480589 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.480621 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrc4\" (UniqueName: 
\"kubernetes.io/projected/859d7045-14df-4211-bc27-60330376ee2a-kube-api-access-8rrc4\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.497185 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.497896 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9js5p\" (UniqueName: \"kubernetes.io/projected/25e70206-4a21-4b3f-ab33-0d4f0edca409-kube-api-access-9js5p\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.497920 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-config-data\") pod \"nova-scheduler-0\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.567108 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.580485 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.581814 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.581863 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-config\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.581959 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrc4\" (UniqueName: \"kubernetes.io/projected/859d7045-14df-4211-bc27-60330376ee2a-kube-api-access-8rrc4\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.582023 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.582055 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " 
pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.582095 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-svc\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.583509 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-config\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.584053 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.584547 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.585260 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.583559 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-svc\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.607288 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrc4\" (UniqueName: \"kubernetes.io/projected/859d7045-14df-4211-bc27-60330376ee2a-kube-api-access-8rrc4\") pod \"dnsmasq-dns-bccf8f775-c8lj5\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") " pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.617408 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.680723 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.741085 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.941432 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerStarted","Data":"1aecb7fdcadd44568d4fff18aff8097658ea9e802bb459aaa6b6138b4d58c602"} Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.945911 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.962230 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4k9hs"] Feb 03 06:21:01 crc kubenswrapper[4872]: I0203 06:21:01.978268 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.57544744 podStartE2EDuration="7.978245959s" podCreationTimestamp="2026-02-03 06:20:54 +0000 UTC" firstStartedPulling="2026-02-03 06:20:55.696906858 +0000 UTC m=+1226.279598272" lastFinishedPulling="2026-02-03 06:21:01.099705377 +0000 UTC m=+1231.682396791" observedRunningTime="2026-02-03 06:21:01.97297858 +0000 UTC m=+1232.555669994" watchObservedRunningTime="2026-02-03 06:21:01.978245959 +0000 UTC m=+1232.560937363" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.287369 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.321929 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.591433 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.625035 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-c8lj5"] Feb 03 06:21:02 crc kubenswrapper[4872]: W0203 06:21:02.646208 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68f2e287_351b_43a2_85d7_9b94f66239f3.slice/crio-773193fefebe0bae184346e0ed0978d3ccb18c6d5926ebb43beda9e56412a7c8 WatchSource:0}: Error finding container 773193fefebe0bae184346e0ed0978d3ccb18c6d5926ebb43beda9e56412a7c8: Status 404 returned error can't find the container with id 773193fefebe0bae184346e0ed0978d3ccb18c6d5926ebb43beda9e56412a7c8 Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.649375 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.675971 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxmjw"] Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.677647 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.696343 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.696596 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.710782 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxmjw"] Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.831522 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-config-data\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.831571 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.831723 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxlpg\" (UniqueName: \"kubernetes.io/projected/30dc4098-27fd-4b55-a9bc-a66d92186ea6-kube-api-access-jxlpg\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.831754 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-scripts\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.932915 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxlpg\" (UniqueName: \"kubernetes.io/projected/30dc4098-27fd-4b55-a9bc-a66d92186ea6-kube-api-access-jxlpg\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.933187 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-scripts\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.934594 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-config-data\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.934654 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.941603 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-config-data\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.942508 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-scripts\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.944576 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.955806 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxlpg\" (UniqueName: \"kubernetes.io/projected/30dc4098-27fd-4b55-a9bc-a66d92186ea6-kube-api-access-jxlpg\") pod \"nova-cell1-conductor-db-sync-pxmjw\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.959956 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4k9hs" event={"ID":"b823238a-397a-4aba-9788-c9bbe361f24e","Type":"ContainerStarted","Data":"3bbdf759550d819d5edf05e39c5676c21a3da212c21424f95d80c4f66e7b660c"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.960003 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4k9hs" event={"ID":"b823238a-397a-4aba-9788-c9bbe361f24e","Type":"ContainerStarted","Data":"fe40a826bc266a2180bbecd9eba14f08550c3f599cc00024a26213b8835d462e"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.968226 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38","Type":"ContainerStarted","Data":"fb24d7f14703e3c828ed3475095d7caeef3111aadebf7c2395868d672ef9c026"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.973177 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f2e287-351b-43a2-85d7-9b94f66239f3","Type":"ContainerStarted","Data":"773193fefebe0bae184346e0ed0978d3ccb18c6d5926ebb43beda9e56412a7c8"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.975187 4872 generic.go:334] "Generic (PLEG): container finished" podID="859d7045-14df-4211-bc27-60330376ee2a" containerID="67b8cb200dbc6895c28eff9022330d5ed2c3cbb0d565dca14fb9363dc000dfda" exitCode=0 Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.975232 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" 
event={"ID":"859d7045-14df-4211-bc27-60330376ee2a","Type":"ContainerDied","Data":"67b8cb200dbc6895c28eff9022330d5ed2c3cbb0d565dca14fb9363dc000dfda"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.975249 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" event={"ID":"859d7045-14df-4211-bc27-60330376ee2a","Type":"ContainerStarted","Data":"e19b3cecdfe5d98230aae93d017bcce31af34366574e04e09e0071299b46d604"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.979757 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa342b9a-b292-4822-939d-201a76172c94","Type":"ContainerStarted","Data":"637c21fed12b330ab40e35885ed5993650bd9a14a0fd5fcafdd76fd0888efe56"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.989792 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4k9hs" podStartSLOduration=2.989770169 podStartE2EDuration="2.989770169s" podCreationTimestamp="2026-02-03 06:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:02.979409586 +0000 UTC m=+1233.562101000" watchObservedRunningTime="2026-02-03 06:21:02.989770169 +0000 UTC m=+1233.572461583" Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.992177 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"25e70206-4a21-4b3f-ab33-0d4f0edca409","Type":"ContainerStarted","Data":"93fe03fb508dfb9d6b758007a0963ee1a3e3777cb5ecdcb0639f91681d3861a1"} Feb 03 06:21:02 crc kubenswrapper[4872]: I0203 06:21:02.997759 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:03 crc kubenswrapper[4872]: I0203 06:21:03.683383 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxmjw"] Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.024367 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" event={"ID":"859d7045-14df-4211-bc27-60330376ee2a","Type":"ContainerStarted","Data":"7378c0e4a6ee9b1950497e1f78ec1f774224cb02249d460a872f1db8b4ddd4db"} Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.025706 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.032459 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" event={"ID":"30dc4098-27fd-4b55-a9bc-a66d92186ea6","Type":"ContainerStarted","Data":"d1af0c5c20e54230f6f2b76d0f914201026f4355122f78801db2becba6dd7275"} Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.032517 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" event={"ID":"30dc4098-27fd-4b55-a9bc-a66d92186ea6","Type":"ContainerStarted","Data":"a2f8b47ca5787542c8d5e853cfd8fa46e730b02d868a45e485dc6f3aaa95c571"} Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.056004 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" podStartSLOduration=3.055988669 podStartE2EDuration="3.055988669s" podCreationTimestamp="2026-02-03 06:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 06:21:04.045987845 +0000 UTC m=+1234.628679259" watchObservedRunningTime="2026-02-03 06:21:04.055988669 +0000 UTC m=+1234.638680083" Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.064998 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" podStartSLOduration=2.064987298 podStartE2EDuration="2.064987298s" podCreationTimestamp="2026-02-03 06:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:04.063921202 +0000 UTC m=+1234.646612616" watchObservedRunningTime="2026-02-03 06:21:04.064987298 +0000 UTC m=+1234.647678702" Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.907524 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:04 crc kubenswrapper[4872]: I0203 06:21:04.914707 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:05 crc kubenswrapper[4872]: I0203 06:21:05.042189 4872 generic.go:334] "Generic (PLEG): container finished" podID="ea66403d-a189-495c-9067-18571e929874" containerID="555391f6e6812ef4d7e698df432a3cb13becfd2fc7c2de23c36f6678fe7447f6" exitCode=137 Feb 03 06:21:05 crc kubenswrapper[4872]: I0203 06:21:05.042349 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerDied","Data":"555391f6e6812ef4d7e698df432a3cb13becfd2fc7c2de23c36f6678fe7447f6"} Feb 03 06:21:06 crc kubenswrapper[4872]: I0203 06:21:06.961755 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.020241 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-combined-ca-bundle\") pod \"ea66403d-a189-495c-9067-18571e929874\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.020336 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea66403d-a189-495c-9067-18571e929874-logs\") pod \"ea66403d-a189-495c-9067-18571e929874\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.020367 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-config-data\") pod \"ea66403d-a189-495c-9067-18571e929874\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.020455 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgsjt\" (UniqueName: \"kubernetes.io/projected/ea66403d-a189-495c-9067-18571e929874-kube-api-access-dgsjt\") pod \"ea66403d-a189-495c-9067-18571e929874\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.020498 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-scripts\") pod \"ea66403d-a189-495c-9067-18571e929874\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " 
Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.020516 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-secret-key\") pod \"ea66403d-a189-495c-9067-18571e929874\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.020570 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-tls-certs\") pod \"ea66403d-a189-495c-9067-18571e929874\" (UID: \"ea66403d-a189-495c-9067-18571e929874\") " Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.021096 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea66403d-a189-495c-9067-18571e929874-logs" (OuterVolumeSpecName: "logs") pod "ea66403d-a189-495c-9067-18571e929874" (UID: "ea66403d-a189-495c-9067-18571e929874"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.029464 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea66403d-a189-495c-9067-18571e929874-kube-api-access-dgsjt" (OuterVolumeSpecName: "kube-api-access-dgsjt") pod "ea66403d-a189-495c-9067-18571e929874" (UID: "ea66403d-a189-495c-9067-18571e929874"). InnerVolumeSpecName "kube-api-access-dgsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.038603 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ea66403d-a189-495c-9067-18571e929874" (UID: "ea66403d-a189-495c-9067-18571e929874"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.077831 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-scripts" (OuterVolumeSpecName: "scripts") pod "ea66403d-a189-495c-9067-18571e929874" (UID: "ea66403d-a189-495c-9067-18571e929874"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.089357 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b48d58c48-rvvcb" event={"ID":"ea66403d-a189-495c-9067-18571e929874","Type":"ContainerDied","Data":"b866d9fc6c5d934b5d2595150c2a4394b57c67415e70621779928a6a8aeaf6db"} Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.089411 4872 scope.go:117] "RemoveContainer" containerID="7bddaad10ce4887ebdf60346c63ab1b8d10079788fd64dfb77756bf3a7ddc99b" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.089567 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b48d58c48-rvvcb" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.101790 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea66403d-a189-495c-9067-18571e929874" (UID: "ea66403d-a189-495c-9067-18571e929874"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.122631 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.122655 4872 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.122664 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.122672 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea66403d-a189-495c-9067-18571e929874-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.122679 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgsjt\" (UniqueName: \"kubernetes.io/projected/ea66403d-a189-495c-9067-18571e929874-kube-api-access-dgsjt\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.123736 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-config-data" (OuterVolumeSpecName: "config-data") pod "ea66403d-a189-495c-9067-18571e929874" (UID: "ea66403d-a189-495c-9067-18571e929874"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.135382 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "ea66403d-a189-495c-9067-18571e929874" (UID: "ea66403d-a189-495c-9067-18571e929874"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.225115 4872 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea66403d-a189-495c-9067-18571e929874-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.225151 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea66403d-a189-495c-9067-18571e929874-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.446223 4872 scope.go:117] "RemoveContainer" containerID="555391f6e6812ef4d7e698df432a3cb13becfd2fc7c2de23c36f6678fe7447f6" Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.456181 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b48d58c48-rvvcb"] Feb 03 06:21:07 crc kubenswrapper[4872]: I0203 06:21:07.469628 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b48d58c48-rvvcb"] Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.101738 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa342b9a-b292-4822-939d-201a76172c94","Type":"ContainerStarted","Data":"bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848"} Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.102905 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fa342b9a-b292-4822-939d-201a76172c94" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848" gracePeriod=30 Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.107817 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"25e70206-4a21-4b3f-ab33-0d4f0edca409","Type":"ContainerStarted","Data":"39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7"} Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.110165 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38","Type":"ContainerStarted","Data":"fc7075ba68a08dad63898a1ead4ed1960338df4f69b3cc422563e1e4b594fb36"} Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.110355 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38","Type":"ContainerStarted","Data":"36845dcd1f9cf364e47ec09d3035dbbdc5a3906b844c0e1c2b251f3586d3994c"} Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.111845 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f2e287-351b-43a2-85d7-9b94f66239f3","Type":"ContainerStarted","Data":"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3"} Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.111877 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f2e287-351b-43a2-85d7-9b94f66239f3","Type":"ContainerStarted","Data":"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5"} Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.111917 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-log" 
containerID="cri-o://375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5" gracePeriod=30 Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.111929 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-metadata" containerID="cri-o://4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3" gracePeriod=30 Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.135548 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea66403d-a189-495c-9067-18571e929874" path="/var/lib/kubelet/pods/ea66403d-a189-495c-9067-18571e929874/volumes" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.139975 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5748547779999997 podStartE2EDuration="7.139964918s" podCreationTimestamp="2026-02-03 06:21:01 +0000 UTC" firstStartedPulling="2026-02-03 06:21:02.295783857 +0000 UTC m=+1232.878475271" lastFinishedPulling="2026-02-03 06:21:06.860893997 +0000 UTC m=+1237.443585411" observedRunningTime="2026-02-03 06:21:08.135870518 +0000 UTC m=+1238.718561932" watchObservedRunningTime="2026-02-03 06:21:08.139964918 +0000 UTC m=+1238.722656332" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.160331 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.5302655830000003 podStartE2EDuration="8.160314174s" podCreationTimestamp="2026-02-03 06:21:00 +0000 UTC" firstStartedPulling="2026-02-03 06:21:02.324054035 +0000 UTC m=+1232.906745439" lastFinishedPulling="2026-02-03 06:21:06.954102616 +0000 UTC m=+1237.536794030" observedRunningTime="2026-02-03 06:21:08.155143258 +0000 UTC m=+1238.737834672" watchObservedRunningTime="2026-02-03 06:21:08.160314174 +0000 UTC m=+1238.743005608" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.197509 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.919121728 podStartE2EDuration="7.197492058s" podCreationTimestamp="2026-02-03 06:21:01 +0000 UTC" firstStartedPulling="2026-02-03 06:21:02.648197485 +0000 UTC m=+1233.230888899" lastFinishedPulling="2026-02-03 06:21:06.926567795 +0000 UTC m=+1237.509259229" observedRunningTime="2026-02-03 06:21:08.183213571 +0000 UTC m=+1238.765904985" watchObservedRunningTime="2026-02-03 06:21:08.197492058 +0000 UTC m=+1238.780183472" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.661426 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.683055 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.415144229 podStartE2EDuration="7.683035866s" podCreationTimestamp="2026-02-03 06:21:01 +0000 UTC" firstStartedPulling="2026-02-03 06:21:02.588945442 +0000 UTC m=+1233.171636856" lastFinishedPulling="2026-02-03 06:21:06.856837069 +0000 UTC m=+1237.439528493" observedRunningTime="2026-02-03 06:21:08.202010808 +0000 UTC m=+1238.784702232" watchObservedRunningTime="2026-02-03 06:21:08.683035866 +0000 UTC m=+1239.265727290" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.764203 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-combined-ca-bundle\") pod \"68f2e287-351b-43a2-85d7-9b94f66239f3\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.764306 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f2e287-351b-43a2-85d7-9b94f66239f3-logs\") pod \"68f2e287-351b-43a2-85d7-9b94f66239f3\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.764506 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgt6h\" (UniqueName: \"kubernetes.io/projected/68f2e287-351b-43a2-85d7-9b94f66239f3-kube-api-access-vgt6h\") pod \"68f2e287-351b-43a2-85d7-9b94f66239f3\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.764546 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-config-data\") pod \"68f2e287-351b-43a2-85d7-9b94f66239f3\" (UID: \"68f2e287-351b-43a2-85d7-9b94f66239f3\") " Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.765140 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68f2e287-351b-43a2-85d7-9b94f66239f3-logs" (OuterVolumeSpecName: "logs") pod "68f2e287-351b-43a2-85d7-9b94f66239f3" (UID: "68f2e287-351b-43a2-85d7-9b94f66239f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.768958 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f2e287-351b-43a2-85d7-9b94f66239f3-kube-api-access-vgt6h" (OuterVolumeSpecName: "kube-api-access-vgt6h") pod "68f2e287-351b-43a2-85d7-9b94f66239f3" (UID: "68f2e287-351b-43a2-85d7-9b94f66239f3"). InnerVolumeSpecName "kube-api-access-vgt6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.791497 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-config-data" (OuterVolumeSpecName: "config-data") pod "68f2e287-351b-43a2-85d7-9b94f66239f3" (UID: "68f2e287-351b-43a2-85d7-9b94f66239f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.796850 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68f2e287-351b-43a2-85d7-9b94f66239f3" (UID: "68f2e287-351b-43a2-85d7-9b94f66239f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.867386 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgt6h\" (UniqueName: \"kubernetes.io/projected/68f2e287-351b-43a2-85d7-9b94f66239f3-kube-api-access-vgt6h\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.867424 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.867439 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f2e287-351b-43a2-85d7-9b94f66239f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:08 crc kubenswrapper[4872]: I0203 06:21:08.867449 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f2e287-351b-43a2-85d7-9b94f66239f3-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.141472 4872 generic.go:334] "Generic (PLEG): container finished" podID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerID="4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3" exitCode=0 Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.141502 4872 generic.go:334] "Generic (PLEG): container finished" podID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerID="375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5" exitCode=143 Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.141930 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.142477 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f2e287-351b-43a2-85d7-9b94f66239f3","Type":"ContainerDied","Data":"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3"} Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.142506 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f2e287-351b-43a2-85d7-9b94f66239f3","Type":"ContainerDied","Data":"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5"} Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.142515 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68f2e287-351b-43a2-85d7-9b94f66239f3","Type":"ContainerDied","Data":"773193fefebe0bae184346e0ed0978d3ccb18c6d5926ebb43beda9e56412a7c8"} Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.142530 4872 scope.go:117] "RemoveContainer" containerID="4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.250656 4872 scope.go:117] "RemoveContainer" containerID="375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.259018 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.282925 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.312726 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:09 crc kubenswrapper[4872]: E0203 06:21:09.313114 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313129 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" Feb 03 06:21:09 crc kubenswrapper[4872]: E0203 06:21:09.313143 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313148 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" Feb 03 06:21:09 crc kubenswrapper[4872]: E0203 06:21:09.313166 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon-log" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313173 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon-log" Feb 03 06:21:09 crc kubenswrapper[4872]: E0203 06:21:09.313194 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-log" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313200 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-log" Feb 03 06:21:09 crc kubenswrapper[4872]: E0203 06:21:09.313211 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-metadata" Feb 03 06:21:09 crc 
kubenswrapper[4872]: I0203 06:21:09.313216 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-metadata" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313369 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon-log" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313384 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-log" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313395 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313406 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" containerName="nova-metadata-metadata" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.313740 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea66403d-a189-495c-9067-18571e929874" containerName="horizon" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.314367 4872 scope.go:117] "RemoveContainer" containerID="4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.314485 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: E0203 06:21:09.316213 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3\": container with ID starting with 4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3 not found: ID does not exist" containerID="4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.316243 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3"} err="failed to get container status \"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3\": rpc error: code = NotFound desc = could not find container \"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3\": container with ID starting with 4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3 not found: ID does not exist" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.316264 4872 scope.go:117] "RemoveContainer" containerID="375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5" Feb 03 06:21:09 crc kubenswrapper[4872]: E0203 06:21:09.316447 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5\": container with ID starting with 375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5 not found: ID does not exist" containerID="375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.316468 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5"} err="failed to get container status 
\"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5\": rpc error: code = NotFound desc = could not find container \"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5\": container with ID starting with 375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5 not found: ID does not exist" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.316482 4872 scope.go:117] "RemoveContainer" containerID="4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.316638 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3"} err="failed to get container status \"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3\": rpc error: code = NotFound desc = could not find container \"4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3\": container with ID starting with 4245215e75a2da56aa5ce25a1d07f7ed88e96b2b403606e98b958a265a3bc9b3 not found: ID does not exist" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.316660 4872 scope.go:117] "RemoveContainer" containerID="375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.316847 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5"} err="failed to get container status \"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5\": rpc error: code = NotFound desc = could not find container \"375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5\": container with ID starting with 375203c01a7df1b807518cef43a1b401d2fab6d9d62472d7cea4a455854aeee5 not found: ID does not exist" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.320622 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.325806 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.329638 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.379825 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-config-data\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.380080 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-logs\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.380226 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 
06:21:09.380334 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.380421 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jrt\" (UniqueName: \"kubernetes.io/projected/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-kube-api-access-n9jrt\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.482552 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-config-data\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.482630 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-logs\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.482801 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.482869 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.482900 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jrt\" (UniqueName: \"kubernetes.io/projected/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-kube-api-access-n9jrt\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.484341 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-logs\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.488478 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.489735 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-config-data\") pod \"nova-metadata-0\" (UID: 
\"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.492421 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.498273 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jrt\" (UniqueName: \"kubernetes.io/projected/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-kube-api-access-n9jrt\") pod \"nova-metadata-0\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " pod="openstack/nova-metadata-0" Feb 03 06:21:09 crc kubenswrapper[4872]: I0203 06:21:09.638332 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:10 crc kubenswrapper[4872]: I0203 06:21:10.179144 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f2e287-351b-43a2-85d7-9b94f66239f3" path="/var/lib/kubelet/pods/68f2e287-351b-43a2-85d7-9b94f66239f3/volumes" Feb 03 06:21:10 crc kubenswrapper[4872]: I0203 06:21:10.204613 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.211888 4872 generic.go:334] "Generic (PLEG): container finished" podID="b823238a-397a-4aba-9788-c9bbe361f24e" containerID="3bbdf759550d819d5edf05e39c5676c21a3da212c21424f95d80c4f66e7b660c" exitCode=0 Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.212000 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4k9hs" event={"ID":"b823238a-397a-4aba-9788-c9bbe361f24e","Type":"ContainerDied","Data":"3bbdf759550d819d5edf05e39c5676c21a3da212c21424f95d80c4f66e7b660c"} Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.218274 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab","Type":"ContainerStarted","Data":"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123"} Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.218479 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab","Type":"ContainerStarted","Data":"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3"} Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.218613 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab","Type":"ContainerStarted","Data":"2fe8bafef38e8cacdba0afeccb7bc6ce5c05742302785c9c5772705b7bcfb5e9"} Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.256107 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.256084002 podStartE2EDuration="2.256084002s" podCreationTimestamp="2026-02-03 06:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:11.254207095 +0000 UTC m=+1241.836898509" watchObservedRunningTime="2026-02-03 06:21:11.256084002 +0000 UTC m=+1241.838775436" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.582964 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.583006 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.618665 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.681661 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.681855 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.714584 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.743736 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.806421 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mgwfc"] Feb 03 06:21:11 crc kubenswrapper[4872]: I0203 06:21:11.806788 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" podUID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerName="dnsmasq-dns" containerID="cri-o://79b34c2e7fad425d1840ba85b1619afeb5c812d69861a010608d1e622f1a5fd5" gracePeriod=10 Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.240054 4872 generic.go:334] "Generic (PLEG): container finished" podID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerID="79b34c2e7fad425d1840ba85b1619afeb5c812d69861a010608d1e622f1a5fd5" exitCode=0 Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.240479 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" event={"ID":"b172b5dc-48c4-43ce-a949-7dec7d0e7567","Type":"ContainerDied","Data":"79b34c2e7fad425d1840ba85b1619afeb5c812d69861a010608d1e622f1a5fd5"} Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.342643 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.422918 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.606591 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dtgb\" (UniqueName: \"kubernetes.io/projected/b172b5dc-48c4-43ce-a949-7dec7d0e7567-kube-api-access-7dtgb\") pod \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.606880 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-svc\") pod \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.607053 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-config\") pod \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.607119 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-swift-storage-0\") pod \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.607170 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-nb\") pod \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.607205 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-sb\") pod \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\" (UID: \"b172b5dc-48c4-43ce-a949-7dec7d0e7567\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.615871 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b172b5dc-48c4-43ce-a949-7dec7d0e7567-kube-api-access-7dtgb" (OuterVolumeSpecName: "kube-api-access-7dtgb") pod "b172b5dc-48c4-43ce-a949-7dec7d0e7567" (UID: "b172b5dc-48c4-43ce-a949-7dec7d0e7567"). InnerVolumeSpecName "kube-api-access-7dtgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.666974 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.667083 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.710932 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dtgb\" (UniqueName: \"kubernetes.io/projected/b172b5dc-48c4-43ce-a949-7dec7d0e7567-kube-api-access-7dtgb\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.712353 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b172b5dc-48c4-43ce-a949-7dec7d0e7567" (UID: "b172b5dc-48c4-43ce-a949-7dec7d0e7567"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.721211 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-config" (OuterVolumeSpecName: "config") pod "b172b5dc-48c4-43ce-a949-7dec7d0e7567" (UID: "b172b5dc-48c4-43ce-a949-7dec7d0e7567"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.726082 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b172b5dc-48c4-43ce-a949-7dec7d0e7567" (UID: "b172b5dc-48c4-43ce-a949-7dec7d0e7567"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.730145 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b172b5dc-48c4-43ce-a949-7dec7d0e7567" (UID: "b172b5dc-48c4-43ce-a949-7dec7d0e7567"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.740050 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b172b5dc-48c4-43ce-a949-7dec7d0e7567" (UID: "b172b5dc-48c4-43ce-a949-7dec7d0e7567"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.782826 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.812747 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.812781 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.812792 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.812802 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.812813 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b172b5dc-48c4-43ce-a949-7dec7d0e7567-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.913820 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-scripts\") pod \"b823238a-397a-4aba-9788-c9bbe361f24e\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.913888 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zspd\" (UniqueName: \"kubernetes.io/projected/b823238a-397a-4aba-9788-c9bbe361f24e-kube-api-access-6zspd\") pod \"b823238a-397a-4aba-9788-c9bbe361f24e\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.913931 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-combined-ca-bundle\") pod \"b823238a-397a-4aba-9788-c9bbe361f24e\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.914055 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-config-data\") pod \"b823238a-397a-4aba-9788-c9bbe361f24e\" (UID: \"b823238a-397a-4aba-9788-c9bbe361f24e\") " Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.920173 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b823238a-397a-4aba-9788-c9bbe361f24e-kube-api-access-6zspd" (OuterVolumeSpecName: "kube-api-access-6zspd") pod "b823238a-397a-4aba-9788-c9bbe361f24e" (UID: "b823238a-397a-4aba-9788-c9bbe361f24e"). InnerVolumeSpecName "kube-api-access-6zspd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.922788 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-scripts" (OuterVolumeSpecName: "scripts") pod "b823238a-397a-4aba-9788-c9bbe361f24e" (UID: "b823238a-397a-4aba-9788-c9bbe361f24e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.951085 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-config-data" (OuterVolumeSpecName: "config-data") pod "b823238a-397a-4aba-9788-c9bbe361f24e" (UID: "b823238a-397a-4aba-9788-c9bbe361f24e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:12 crc kubenswrapper[4872]: I0203 06:21:12.952522 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b823238a-397a-4aba-9788-c9bbe361f24e" (UID: "b823238a-397a-4aba-9788-c9bbe361f24e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.016626 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.016656 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zspd\" (UniqueName: \"kubernetes.io/projected/b823238a-397a-4aba-9788-c9bbe361f24e-kube-api-access-6zspd\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.016667 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.016678 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b823238a-397a-4aba-9788-c9bbe361f24e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.250805 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4k9hs" event={"ID":"b823238a-397a-4aba-9788-c9bbe361f24e","Type":"ContainerDied","Data":"fe40a826bc266a2180bbecd9eba14f08550c3f599cc00024a26213b8835d462e"} Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.250845 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe40a826bc266a2180bbecd9eba14f08550c3f599cc00024a26213b8835d462e" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.250887 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4k9hs" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.259489 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" event={"ID":"b172b5dc-48c4-43ce-a949-7dec7d0e7567","Type":"ContainerDied","Data":"def04805f95880b9a9e777f8edaa7c8d0dc226cd9d32b21989bc25beb171b7ee"} Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.259543 4872 scope.go:117] "RemoveContainer" containerID="79b34c2e7fad425d1840ba85b1619afeb5c812d69861a010608d1e622f1a5fd5" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.259666 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mgwfc" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.281729 4872 generic.go:334] "Generic (PLEG): container finished" podID="30dc4098-27fd-4b55-a9bc-a66d92186ea6" containerID="d1af0c5c20e54230f6f2b76d0f914201026f4355122f78801db2becba6dd7275" exitCode=0 Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.282602 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" event={"ID":"30dc4098-27fd-4b55-a9bc-a66d92186ea6","Type":"ContainerDied","Data":"d1af0c5c20e54230f6f2b76d0f914201026f4355122f78801db2becba6dd7275"} Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.349880 4872 scope.go:117] "RemoveContainer" containerID="5117d81e75302041fdae2a1efd1eae7732a9e81c62cafefb06676cd851ee9ee5" Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.373703 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mgwfc"] Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.383261 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mgwfc"] Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.471319 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.487440 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.487728 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-log" containerID="cri-o://36845dcd1f9cf364e47ec09d3035dbbdc5a3906b844c0e1c2b251f3586d3994c" gracePeriod=30 Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.487890 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-api" containerID="cri-o://fc7075ba68a08dad63898a1ead4ed1960338df4f69b3cc422563e1e4b594fb36" gracePeriod=30 Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.507611 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.508033 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerName="nova-metadata-metadata" containerID="cri-o://9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123" gracePeriod=30 Feb 03 06:21:13 crc kubenswrapper[4872]: I0203 06:21:13.507909 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" 
containerName="nova-metadata-log" containerID="cri-o://f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3" gracePeriod=30 Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.010200 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.132335 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" path="/var/lib/kubelet/pods/b172b5dc-48c4-43ce-a949-7dec7d0e7567/volumes" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.147900 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-combined-ca-bundle\") pod \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.148005 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-config-data\") pod \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.148030 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-logs\") pod \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.148092 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9jrt\" (UniqueName: \"kubernetes.io/projected/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-kube-api-access-n9jrt\") pod \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.148112 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-nova-metadata-tls-certs\") pod \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\" (UID: \"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.149500 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-logs" (OuterVolumeSpecName: "logs") pod "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" (UID: "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.174043 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-kube-api-access-n9jrt" (OuterVolumeSpecName: "kube-api-access-n9jrt") pod "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" (UID: "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab"). InnerVolumeSpecName "kube-api-access-n9jrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.178763 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-config-data" (OuterVolumeSpecName: "config-data") pod "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" (UID: "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.185404 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" (UID: "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.218322 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" (UID: "5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.251498 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9jrt\" (UniqueName: \"kubernetes.io/projected/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-kube-api-access-n9jrt\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.251524 4872 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.251534 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.251544 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.251552 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.292813 4872 generic.go:334] "Generic (PLEG): container finished" podID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerID="9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123" exitCode=0 Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.292841 4872 generic.go:334] "Generic (PLEG): container finished" podID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerID="f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3" exitCode=143 Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.292871 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab","Type":"ContainerDied","Data":"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123"} Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.292894 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab","Type":"ContainerDied","Data":"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3"} Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.292903 4872 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab","Type":"ContainerDied","Data":"2fe8bafef38e8cacdba0afeccb7bc6ce5c05742302785c9c5772705b7bcfb5e9"} Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.292917 4872 scope.go:117] "RemoveContainer" containerID="9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.292998 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.300315 4872 generic.go:334] "Generic (PLEG): container finished" podID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerID="36845dcd1f9cf364e47ec09d3035dbbdc5a3906b844c0e1c2b251f3586d3994c" exitCode=143 Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.300355 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38","Type":"ContainerDied","Data":"36845dcd1f9cf364e47ec09d3035dbbdc5a3906b844c0e1c2b251f3586d3994c"} Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.350583 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.358743 4872 scope.go:117] "RemoveContainer" containerID="f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.366400 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.374575 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:14 crc kubenswrapper[4872]: E0203 06:21:14.375230 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b823238a-397a-4aba-9788-c9bbe361f24e" containerName="nova-manage" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.375310 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b823238a-397a-4aba-9788-c9bbe361f24e" containerName="nova-manage" Feb 03 06:21:14 crc kubenswrapper[4872]: E0203 06:21:14.375382 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerName="nova-metadata-metadata" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.375447 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerName="nova-metadata-metadata" Feb 03 06:21:14 crc kubenswrapper[4872]: E0203 06:21:14.375511 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerName="dnsmasq-dns" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.375578 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerName="dnsmasq-dns" Feb 03 06:21:14 crc kubenswrapper[4872]: E0203 06:21:14.375642 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerName="init" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.375713 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerName="init" Feb 03 06:21:14 crc kubenswrapper[4872]: E0203 06:21:14.375784 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerName="nova-metadata-log" Feb 03 06:21:14 crc kubenswrapper[4872]: 
I0203 06:21:14.375854 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerName="nova-metadata-log" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.376144 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b172b5dc-48c4-43ce-a949-7dec7d0e7567" containerName="dnsmasq-dns" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.376222 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b823238a-397a-4aba-9788-c9bbe361f24e" containerName="nova-manage" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.376294 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerName="nova-metadata-metadata" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.376370 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" containerName="nova-metadata-log" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.378353 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.381513 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.381654 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.391524 4872 scope.go:117] "RemoveContainer" containerID="9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123" Feb 03 06:21:14 crc kubenswrapper[4872]: E0203 06:21:14.392242 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123\": container with ID starting with 9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123 not found: ID does not exist" containerID="9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.392275 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123"} err="failed to get container status \"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123\": rpc error: code = NotFound desc = could not find container \"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123\": container with ID starting with 9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123 not found: ID does not exist" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.392299 4872 scope.go:117] "RemoveContainer" containerID="f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3" Feb 03 06:21:14 crc kubenswrapper[4872]: E0203 06:21:14.392578 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3\": container with ID starting with f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3 not found: ID does not exist" containerID="f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.392607 4872 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3"} err="failed to get container status \"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3\": rpc error: code = NotFound desc = could not find container \"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3\": container with ID starting with f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3 not found: ID does not exist" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.392628 4872 scope.go:117] "RemoveContainer" containerID="9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.392768 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.393183 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123"} err="failed to get container status \"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123\": rpc error: code = NotFound desc = could not find container \"9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123\": container with ID starting with 9d07073300dfbf405a88f98a3e2883a8e882e1db8745456334b7b24d16c61123 not found: ID does not exist" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.393297 4872 scope.go:117] "RemoveContainer" containerID="f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.393590 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3"} err="failed to get container status \"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3\": rpc error: code = NotFound desc = could not find container \"f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3\": container with ID starting with f393b3a4d4b45b4b7e270de75eb076613a740387d4ff32ada3c91864641974c3 not found: ID does not exist" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.555914 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n9s8\" (UniqueName: \"kubernetes.io/projected/f0bb52bd-b765-49d2-908e-38755908e575-kube-api-access-2n9s8\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.555958 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-config-data\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.555989 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.556055 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f0bb52bd-b765-49d2-908e-38755908e575-logs\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.556159 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.657682 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n9s8\" (UniqueName: \"kubernetes.io/projected/f0bb52bd-b765-49d2-908e-38755908e575-kube-api-access-2n9s8\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.657748 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-config-data\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.657782 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.657802 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb52bd-b765-49d2-908e-38755908e575-logs\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.657876 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.658417 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb52bd-b765-49d2-908e-38755908e575-logs\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.662110 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.662348 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.662583 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-config-data\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.677230 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n9s8\" (UniqueName: \"kubernetes.io/projected/f0bb52bd-b765-49d2-908e-38755908e575-kube-api-access-2n9s8\") pod \"nova-metadata-0\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") " pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.714622 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.731232 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.866980 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-scripts\") pod \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.867213 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-combined-ca-bundle\") pod \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.867399 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-config-data\") pod \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.867465 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxlpg\" (UniqueName: \"kubernetes.io/projected/30dc4098-27fd-4b55-a9bc-a66d92186ea6-kube-api-access-jxlpg\") pod \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\" (UID: \"30dc4098-27fd-4b55-a9bc-a66d92186ea6\") " Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.872502 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-scripts" (OuterVolumeSpecName: "scripts") pod "30dc4098-27fd-4b55-a9bc-a66d92186ea6" (UID: "30dc4098-27fd-4b55-a9bc-a66d92186ea6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.882236 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dc4098-27fd-4b55-a9bc-a66d92186ea6-kube-api-access-jxlpg" (OuterVolumeSpecName: "kube-api-access-jxlpg") pod "30dc4098-27fd-4b55-a9bc-a66d92186ea6" (UID: "30dc4098-27fd-4b55-a9bc-a66d92186ea6"). InnerVolumeSpecName "kube-api-access-jxlpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.904824 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-config-data" (OuterVolumeSpecName: "config-data") pod "30dc4098-27fd-4b55-a9bc-a66d92186ea6" (UID: "30dc4098-27fd-4b55-a9bc-a66d92186ea6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.908896 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30dc4098-27fd-4b55-a9bc-a66d92186ea6" (UID: "30dc4098-27fd-4b55-a9bc-a66d92186ea6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.970301 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.970353 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxlpg\" (UniqueName: \"kubernetes.io/projected/30dc4098-27fd-4b55-a9bc-a66d92186ea6-kube-api-access-jxlpg\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.970362 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:14 crc kubenswrapper[4872]: I0203 06:21:14.970371 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30dc4098-27fd-4b55-a9bc-a66d92186ea6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.290346 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:21:15 crc kubenswrapper[4872]: W0203 06:21:15.297164 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0bb52bd_b765_49d2_908e_38755908e575.slice/crio-53799548c90f9756aa2c0f4fc91f83e0d8a69e6ab5592e2f633e306eb2156081 WatchSource:0}: Error finding container 53799548c90f9756aa2c0f4fc91f83e0d8a69e6ab5592e2f633e306eb2156081: Status 404 returned error can't find the container with id 53799548c90f9756aa2c0f4fc91f83e0d8a69e6ab5592e2f633e306eb2156081 Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.314985 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" event={"ID":"30dc4098-27fd-4b55-a9bc-a66d92186ea6","Type":"ContainerDied","Data":"a2f8b47ca5787542c8d5e853cfd8fa46e730b02d868a45e485dc6f3aaa95c571"} Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.315020 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pxmjw" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.315029 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f8b47ca5787542c8d5e853cfd8fa46e730b02d868a45e485dc6f3aaa95c571" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.318837 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="25e70206-4a21-4b3f-ab33-0d4f0edca409" containerName="nova-scheduler-scheduler" containerID="cri-o://39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7" gracePeriod=30 Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.319422 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bb52bd-b765-49d2-908e-38755908e575","Type":"ContainerStarted","Data":"53799548c90f9756aa2c0f4fc91f83e0d8a69e6ab5592e2f633e306eb2156081"} Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.399672 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 06:21:15 crc kubenswrapper[4872]: E0203 06:21:15.401808 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dc4098-27fd-4b55-a9bc-a66d92186ea6" containerName="nova-cell1-conductor-db-sync" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.401838 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dc4098-27fd-4b55-a9bc-a66d92186ea6" containerName="nova-cell1-conductor-db-sync" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.402158 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="30dc4098-27fd-4b55-a9bc-a66d92186ea6" containerName="nova-cell1-conductor-db-sync" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.403017 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.409790 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.430907 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.584466 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675jk\" (UniqueName: \"kubernetes.io/projected/d239893e-43bd-4f8f-b03e-6451a16a0865-kube-api-access-675jk\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.584575 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239893e-43bd-4f8f-b03e-6451a16a0865-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.584618 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d239893e-43bd-4f8f-b03e-6451a16a0865-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.686306 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239893e-43bd-4f8f-b03e-6451a16a0865-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.686360 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d239893e-43bd-4f8f-b03e-6451a16a0865-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.686457 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675jk\" (UniqueName: \"kubernetes.io/projected/d239893e-43bd-4f8f-b03e-6451a16a0865-kube-api-access-675jk\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.691073 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239893e-43bd-4f8f-b03e-6451a16a0865-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.691155 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d239893e-43bd-4f8f-b03e-6451a16a0865-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.703757 4872 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675jk\" (UniqueName: \"kubernetes.io/projected/d239893e-43bd-4f8f-b03e-6451a16a0865-kube-api-access-675jk\") pod \"nova-cell1-conductor-0\" (UID: \"d239893e-43bd-4f8f-b03e-6451a16a0865\") " pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:15 crc kubenswrapper[4872]: I0203 06:21:15.775195 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:16 crc kubenswrapper[4872]: I0203 06:21:16.132171 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab" path="/var/lib/kubelet/pods/5f0d44fd-2793-4b6b-8a03-9e673d8ed0ab/volumes" Feb 03 06:21:16 crc kubenswrapper[4872]: I0203 06:21:16.223394 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 06:21:16 crc kubenswrapper[4872]: I0203 06:21:16.328938 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bb52bd-b765-49d2-908e-38755908e575","Type":"ContainerStarted","Data":"d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c"} Feb 03 06:21:16 crc kubenswrapper[4872]: I0203 06:21:16.328985 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bb52bd-b765-49d2-908e-38755908e575","Type":"ContainerStarted","Data":"d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401"} Feb 03 06:21:16 crc kubenswrapper[4872]: I0203 06:21:16.331793 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d239893e-43bd-4f8f-b03e-6451a16a0865","Type":"ContainerStarted","Data":"874c166ecc2da37d01f3a3e9829b24c517da6f87d5adb1765899f4bc9d3e4f31"} Feb 03 06:21:16 crc kubenswrapper[4872]: I0203 06:21:16.354448 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.354429899 podStartE2EDuration="2.354429899s" podCreationTimestamp="2026-02-03 06:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:16.353451165 +0000 UTC m=+1246.936142689" watchObservedRunningTime="2026-02-03 06:21:16.354429899 +0000 UTC m=+1246.937121303" Feb 03 06:21:16 crc kubenswrapper[4872]: E0203 06:21:16.691234 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 06:21:16 crc kubenswrapper[4872]: E0203 06:21:16.693846 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 06:21:16 crc kubenswrapper[4872]: E0203 06:21:16.696485 4872 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 
06:21:16 crc kubenswrapper[4872]: E0203 06:21:16.696557 4872 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="25e70206-4a21-4b3f-ab33-0d4f0edca409" containerName="nova-scheduler-scheduler" Feb 03 06:21:17 crc kubenswrapper[4872]: I0203 06:21:17.346400 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d239893e-43bd-4f8f-b03e-6451a16a0865","Type":"ContainerStarted","Data":"c16792e782f6ff94c627b94351c9442ae8a4350872b30ea4002db7bb3f32b96d"} Feb 03 06:21:17 crc kubenswrapper[4872]: I0203 06:21:17.348745 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:17 crc kubenswrapper[4872]: I0203 06:21:17.377200 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.377176711 podStartE2EDuration="2.377176711s" podCreationTimestamp="2026-02-03 06:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:17.370667343 +0000 UTC m=+1247.953358777" watchObservedRunningTime="2026-02-03 06:21:17.377176711 +0000 UTC m=+1247.959868125" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.370030 4872 generic.go:334] "Generic (PLEG): container finished" podID="25e70206-4a21-4b3f-ab33-0d4f0edca409" containerID="39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7" exitCode=0 Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.370492 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"25e70206-4a21-4b3f-ab33-0d4f0edca409","Type":"ContainerDied","Data":"39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7"} Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.373624 4872 generic.go:334] "Generic (PLEG): container finished" podID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerID="fc7075ba68a08dad63898a1ead4ed1960338df4f69b3cc422563e1e4b594fb36" exitCode=0 Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.373654 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38","Type":"ContainerDied","Data":"fc7075ba68a08dad63898a1ead4ed1960338df4f69b3cc422563e1e4b594fb36"} Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.373673 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38","Type":"ContainerDied","Data":"fb24d7f14703e3c828ed3475095d7caeef3111aadebf7c2395868d672ef9c026"} Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.373696 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb24d7f14703e3c828ed3475095d7caeef3111aadebf7c2395868d672ef9c026" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.507973 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.694514 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-combined-ca-bundle\") pod \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.694699 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-logs\") pod \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.694980 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q7rc\" (UniqueName: \"kubernetes.io/projected/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-kube-api-access-6q7rc\") pod \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.695012 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-config-data\") pod \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\" (UID: \"8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38\") " Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.695574 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-logs" (OuterVolumeSpecName: "logs") pod "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" (UID: "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.700640 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-kube-api-access-6q7rc" (OuterVolumeSpecName: "kube-api-access-6q7rc") pod "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" (UID: "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38"). InnerVolumeSpecName "kube-api-access-6q7rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.717455 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.717528 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.728183 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-config-data" (OuterVolumeSpecName: "config-data") pod "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" (UID: "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.735107 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" (UID: "8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.797705 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.797748 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.797760 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q7rc\" (UniqueName: \"kubernetes.io/projected/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-kube-api-access-6q7rc\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.797771 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:19 crc kubenswrapper[4872]: I0203 06:21:19.834105 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.001510 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-combined-ca-bundle\") pod \"25e70206-4a21-4b3f-ab33-0d4f0edca409\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.001704 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9js5p\" (UniqueName: \"kubernetes.io/projected/25e70206-4a21-4b3f-ab33-0d4f0edca409-kube-api-access-9js5p\") pod \"25e70206-4a21-4b3f-ab33-0d4f0edca409\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.001736 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-config-data\") pod \"25e70206-4a21-4b3f-ab33-0d4f0edca409\" (UID: \"25e70206-4a21-4b3f-ab33-0d4f0edca409\") " Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.011596 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e70206-4a21-4b3f-ab33-0d4f0edca409-kube-api-access-9js5p" (OuterVolumeSpecName: "kube-api-access-9js5p") pod "25e70206-4a21-4b3f-ab33-0d4f0edca409" (UID: "25e70206-4a21-4b3f-ab33-0d4f0edca409"). InnerVolumeSpecName "kube-api-access-9js5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.033831 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-config-data" (OuterVolumeSpecName: "config-data") pod "25e70206-4a21-4b3f-ab33-0d4f0edca409" (UID: "25e70206-4a21-4b3f-ab33-0d4f0edca409"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.040751 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25e70206-4a21-4b3f-ab33-0d4f0edca409" (UID: "25e70206-4a21-4b3f-ab33-0d4f0edca409"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.104613 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9js5p\" (UniqueName: \"kubernetes.io/projected/25e70206-4a21-4b3f-ab33-0d4f0edca409-kube-api-access-9js5p\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.104668 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.104680 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e70206-4a21-4b3f-ab33-0d4f0edca409-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.385236 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"25e70206-4a21-4b3f-ab33-0d4f0edca409","Type":"ContainerDied","Data":"93fe03fb508dfb9d6b758007a0963ee1a3e3777cb5ecdcb0639f91681d3861a1"} Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.385525 4872 scope.go:117] "RemoveContainer" containerID="39a8f0d683ea2edaccc66e97023c6056b5cdd318c4d26f4a84af9b7cf24cbfb7" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.385266 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.385319 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.426075 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.444244 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.472757 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.482943 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.490144 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: E0203 06:21:20.490470 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e70206-4a21-4b3f-ab33-0d4f0edca409" containerName="nova-scheduler-scheduler" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.490482 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e70206-4a21-4b3f-ab33-0d4f0edca409" containerName="nova-scheduler-scheduler" Feb 03 06:21:20 crc kubenswrapper[4872]: E0203 06:21:20.490495 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-log" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.490501 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-log" Feb 03 06:21:20 crc kubenswrapper[4872]: E0203 06:21:20.490517 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-api" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.490523 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-api" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.490848 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-log" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.490862 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" containerName="nova-api-api" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.490899 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e70206-4a21-4b3f-ab33-0d4f0edca409" containerName="nova-scheduler-scheduler" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.492144 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.511738 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.513054 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.513150 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.518117 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.519987 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.520293 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.614817 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-config-data\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.616781 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csdd6\" (UniqueName: \"kubernetes.io/projected/13bbaa37-3973-4eb7-b03a-6b90547c76b8-kube-api-access-csdd6\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.616984 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-config-data\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.617115 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.617202 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.617349 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ef4752-1c13-4c96-ad78-a9451291fd7a-logs\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.617436 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdqrc\" (UniqueName: \"kubernetes.io/projected/89ef4752-1c13-4c96-ad78-a9451291fd7a-kube-api-access-bdqrc\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.718810 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-config-data\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc 
kubenswrapper[4872]: I0203 06:21:20.718896 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.718930 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.718958 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ef4752-1c13-4c96-ad78-a9451291fd7a-logs\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.718989 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdqrc\" (UniqueName: \"kubernetes.io/projected/89ef4752-1c13-4c96-ad78-a9451291fd7a-kube-api-access-bdqrc\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.719053 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-config-data\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.719093 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csdd6\" (UniqueName: \"kubernetes.io/projected/13bbaa37-3973-4eb7-b03a-6b90547c76b8-kube-api-access-csdd6\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.720245 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ef4752-1c13-4c96-ad78-a9451291fd7a-logs\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.723797 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-config-data\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.724164 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.724181 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc 
kubenswrapper[4872]: I0203 06:21:20.730978 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-config-data\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.740103 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdqrc\" (UniqueName: \"kubernetes.io/projected/89ef4752-1c13-4c96-ad78-a9451291fd7a-kube-api-access-bdqrc\") pod \"nova-api-0\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.742549 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csdd6\" (UniqueName: \"kubernetes.io/projected/13bbaa37-3973-4eb7-b03a-6b90547c76b8-kube-api-access-csdd6\") pod \"nova-scheduler-0\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") " pod="openstack/nova-scheduler-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.835522 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:20 crc kubenswrapper[4872]: I0203 06:21:20.846273 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 06:21:21 crc kubenswrapper[4872]: W0203 06:21:21.352468 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ef4752_1c13_4c96_ad78_a9451291fd7a.slice/crio-dec98314dedc7418e5cac054f1723281f56eb5eee71b924258b5d6e1ddd0f780 WatchSource:0}: Error finding container dec98314dedc7418e5cac054f1723281f56eb5eee71b924258b5d6e1ddd0f780: Status 404 returned error can't find the container with id dec98314dedc7418e5cac054f1723281f56eb5eee71b924258b5d6e1ddd0f780 Feb 03 06:21:21 crc kubenswrapper[4872]: I0203 06:21:21.355846 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:21 crc kubenswrapper[4872]: I0203 06:21:21.367114 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 06:21:21 crc kubenswrapper[4872]: W0203 06:21:21.385446 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bbaa37_3973_4eb7_b03a_6b90547c76b8.slice/crio-b132bba7d346cdf25038e86009771350776365f460540db09cc8b6ecf57b5b6d WatchSource:0}: Error finding container b132bba7d346cdf25038e86009771350776365f460540db09cc8b6ecf57b5b6d: Status 404 returned error can't find the container with id b132bba7d346cdf25038e86009771350776365f460540db09cc8b6ecf57b5b6d Feb 03 06:21:21 crc kubenswrapper[4872]: I0203 06:21:21.402580 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89ef4752-1c13-4c96-ad78-a9451291fd7a","Type":"ContainerStarted","Data":"dec98314dedc7418e5cac054f1723281f56eb5eee71b924258b5d6e1ddd0f780"} Feb 03 06:21:21 crc kubenswrapper[4872]: I0203 06:21:21.404090 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13bbaa37-3973-4eb7-b03a-6b90547c76b8","Type":"ContainerStarted","Data":"b132bba7d346cdf25038e86009771350776365f460540db09cc8b6ecf57b5b6d"} Feb 03 06:21:22 crc kubenswrapper[4872]: I0203 06:21:22.134327 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e70206-4a21-4b3f-ab33-0d4f0edca409" 
path="/var/lib/kubelet/pods/25e70206-4a21-4b3f-ab33-0d4f0edca409/volumes" Feb 03 06:21:22 crc kubenswrapper[4872]: I0203 06:21:22.135251 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38" path="/var/lib/kubelet/pods/8bdcc8ae-c0ca-4ecf-a2f6-1d8a3bc5ba38/volumes" Feb 03 06:21:22 crc kubenswrapper[4872]: I0203 06:21:22.418031 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13bbaa37-3973-4eb7-b03a-6b90547c76b8","Type":"ContainerStarted","Data":"7d32e03d7e5b3e08c5358e90e0a3d1b04b2a6d93a1bab521915a59ff473640c8"} Feb 03 06:21:22 crc kubenswrapper[4872]: I0203 06:21:22.421739 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89ef4752-1c13-4c96-ad78-a9451291fd7a","Type":"ContainerStarted","Data":"19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12"} Feb 03 06:21:22 crc kubenswrapper[4872]: I0203 06:21:22.422534 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89ef4752-1c13-4c96-ad78-a9451291fd7a","Type":"ContainerStarted","Data":"f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb"} Feb 03 06:21:22 crc kubenswrapper[4872]: I0203 06:21:22.456035 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.456005785 podStartE2EDuration="2.456005785s" podCreationTimestamp="2026-02-03 06:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:22.441283396 +0000 UTC m=+1253.023974850" watchObservedRunningTime="2026-02-03 06:21:22.456005785 +0000 UTC m=+1253.038697239" Feb 03 06:21:22 crc kubenswrapper[4872]: I0203 06:21:22.473466 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.473448689 podStartE2EDuration="2.473448689s" podCreationTimestamp="2026-02-03 06:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:22.468811326 +0000 UTC m=+1253.051502750" watchObservedRunningTime="2026-02-03 06:21:22.473448689 +0000 UTC m=+1253.056140103" Feb 03 06:21:24 crc kubenswrapper[4872]: I0203 06:21:24.715883 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 06:21:24 crc kubenswrapper[4872]: I0203 06:21:24.716180 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 06:21:25 crc kubenswrapper[4872]: I0203 06:21:25.220614 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 06:21:25 crc kubenswrapper[4872]: I0203 06:21:25.728926 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:21:25 crc kubenswrapper[4872]: I0203 06:21:25.728998 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 03 06:21:25 crc kubenswrapper[4872]: I0203 06:21:25.802393 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 03 06:21:25 crc kubenswrapper[4872]: I0203 06:21:25.847574 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 06:21:28 crc kubenswrapper[4872]: I0203 06:21:28.939639 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:21:28 crc kubenswrapper[4872]: I0203 06:21:28.940132 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f2a15446-559d-442b-859c-783ab8e7a828" containerName="kube-state-metrics" containerID="cri-o://64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19" gracePeriod=30 Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.440918 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.490526 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sz2n\" (UniqueName: \"kubernetes.io/projected/f2a15446-559d-442b-859c-783ab8e7a828-kube-api-access-6sz2n\") pod \"f2a15446-559d-442b-859c-783ab8e7a828\" (UID: \"f2a15446-559d-442b-859c-783ab8e7a828\") " Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.498984 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a15446-559d-442b-859c-783ab8e7a828-kube-api-access-6sz2n" (OuterVolumeSpecName: "kube-api-access-6sz2n") pod "f2a15446-559d-442b-859c-783ab8e7a828" (UID: "f2a15446-559d-442b-859c-783ab8e7a828"). InnerVolumeSpecName "kube-api-access-6sz2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.503522 4872 generic.go:334] "Generic (PLEG): container finished" podID="f2a15446-559d-442b-859c-783ab8e7a828" containerID="64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19" exitCode=2 Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.503795 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f2a15446-559d-442b-859c-783ab8e7a828","Type":"ContainerDied","Data":"64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19"} Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.504232 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f2a15446-559d-442b-859c-783ab8e7a828","Type":"ContainerDied","Data":"0294d2de204483bfa1709c78677b2f48ba5c5e0a6428b05688acf99098f05c92"} Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.504262 4872 scope.go:117] "RemoveContainer" containerID="64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.503816 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.559977 4872 scope.go:117] "RemoveContainer" containerID="64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19" Feb 03 06:21:29 crc kubenswrapper[4872]: E0203 06:21:29.560510 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19\": container with ID starting with 64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19 not found: ID does not exist" containerID="64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.560549 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19"} err="failed to get container status \"64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19\": rpc error: code = NotFound desc = could not find container \"64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19\": container with ID starting with 64dd9d6c1c99c883ea3960347844787b4781c4a2157942eef6362089e2716c19 not found: ID does not exist" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.564118 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.583523 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.593193 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sz2n\" (UniqueName: \"kubernetes.io/projected/f2a15446-559d-442b-859c-783ab8e7a828-kube-api-access-6sz2n\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.596123 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:21:29 crc kubenswrapper[4872]: E0203 06:21:29.596505 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a15446-559d-442b-859c-783ab8e7a828" containerName="kube-state-metrics" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.596523 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a15446-559d-442b-859c-783ab8e7a828" containerName="kube-state-metrics" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.596735 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a15446-559d-442b-859c-783ab8e7a828" containerName="kube-state-metrics" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.597349 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.599802 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.601300 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.612831 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.695017 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.695085 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.695113 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtnh\" (UniqueName: \"kubernetes.io/projected/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-api-access-djtnh\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.695410 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.796299 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.796615 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.796726 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.796820 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtnh\" 
(UniqueName: \"kubernetes.io/projected/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-api-access-djtnh\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.811299 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.812834 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.814470 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.815041 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtnh\" (UniqueName: \"kubernetes.io/projected/d26e8416-4ddb-40f1-bfa1-482da12274a3-kube-api-access-djtnh\") pod \"kube-state-metrics-0\" (UID: \"d26e8416-4ddb-40f1-bfa1-482da12274a3\") " pod="openstack/kube-state-metrics-0" Feb 03 06:21:29 crc kubenswrapper[4872]: I0203 06:21:29.916033 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.134029 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a15446-559d-442b-859c-783ab8e7a828" path="/var/lib/kubelet/pods/f2a15446-559d-442b-859c-783ab8e7a828/volumes" Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.387073 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.387800 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.514238 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d26e8416-4ddb-40f1-bfa1-482da12274a3","Type":"ContainerStarted","Data":"8402cc980a404aa2421861024658605b7305e6a47529a19a209391cea25d1019"} Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.792753 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.793199 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-central-agent" containerID="cri-o://415114c15d51347effca4b0ccc8c18462409daa4200985fcc95cb0ce5e7cf3da" gracePeriod=30 Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.793271 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="sg-core" containerID="cri-o://fc3b41323d4de808a2039ee877898cbea1c31ecca78c822b67d5f4ddb00638c5" gracePeriod=30 Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.793407 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="proxy-httpd" containerID="cri-o://1aecb7fdcadd44568d4fff18aff8097658ea9e802bb459aaa6b6138b4d58c602" gracePeriod=30 Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.793510 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-notification-agent" containerID="cri-o://a029b58c464fa2bbb44a6a4b0bd076717230c4008f3fff5fb17d49c7a5cd85ef" gracePeriod=30 Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.836772 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.837502 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.846989 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 06:21:30 crc kubenswrapper[4872]: I0203 06:21:30.880887 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.271108 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:21:31 crc kubenswrapper[4872]: 
I0203 06:21:31.271151 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.542194 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d26e8416-4ddb-40f1-bfa1-482da12274a3","Type":"ContainerStarted","Data":"4ef636fc494433b26ba2e40d0122e6521dc984c5b6ddc96315209e51c5bcf71a"} Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.542465 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552077 4872 generic.go:334] "Generic (PLEG): container finished" podID="74254242-70b2-4a96-832c-b14d9469fb55" containerID="1aecb7fdcadd44568d4fff18aff8097658ea9e802bb459aaa6b6138b4d58c602" exitCode=0 Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552110 4872 generic.go:334] "Generic (PLEG): container finished" podID="74254242-70b2-4a96-832c-b14d9469fb55" containerID="fc3b41323d4de808a2039ee877898cbea1c31ecca78c822b67d5f4ddb00638c5" exitCode=2 Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552119 4872 generic.go:334] "Generic (PLEG): container finished" podID="74254242-70b2-4a96-832c-b14d9469fb55" containerID="a029b58c464fa2bbb44a6a4b0bd076717230c4008f3fff5fb17d49c7a5cd85ef" exitCode=0 Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552126 4872 generic.go:334] "Generic (PLEG): container finished" podID="74254242-70b2-4a96-832c-b14d9469fb55" containerID="415114c15d51347effca4b0ccc8c18462409daa4200985fcc95cb0ce5e7cf3da" exitCode=0 Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552481 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerDied","Data":"1aecb7fdcadd44568d4fff18aff8097658ea9e802bb459aaa6b6138b4d58c602"} Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552533 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerDied","Data":"fc3b41323d4de808a2039ee877898cbea1c31ecca78c822b67d5f4ddb00638c5"} Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552542 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerDied","Data":"a029b58c464fa2bbb44a6a4b0bd076717230c4008f3fff5fb17d49c7a5cd85ef"} Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.552552 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerDied","Data":"415114c15d51347effca4b0ccc8c18462409daa4200985fcc95cb0ce5e7cf3da"} Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.572011 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.127704803 podStartE2EDuration="2.571992566s" podCreationTimestamp="2026-02-03 06:21:29 +0000 UTC" firstStartedPulling="2026-02-03 06:21:30.386645797 +0000 UTC m=+1260.969337241" lastFinishedPulling="2026-02-03 06:21:30.83093359 +0000 UTC m=+1261.413625004" observedRunningTime="2026-02-03 06:21:31.556745376 +0000 UTC 
m=+1262.139436790" watchObservedRunningTime="2026-02-03 06:21:31.571992566 +0000 UTC m=+1262.154683970" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.601930 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.805600 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.918873 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.918895 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.934763 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-log-httpd\") pod \"74254242-70b2-4a96-832c-b14d9469fb55\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.934823 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mst2r\" (UniqueName: \"kubernetes.io/projected/74254242-70b2-4a96-832c-b14d9469fb55-kube-api-access-mst2r\") pod \"74254242-70b2-4a96-832c-b14d9469fb55\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.934875 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-scripts\") pod \"74254242-70b2-4a96-832c-b14d9469fb55\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.934904 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-sg-core-conf-yaml\") pod \"74254242-70b2-4a96-832c-b14d9469fb55\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.934963 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-run-httpd\") pod \"74254242-70b2-4a96-832c-b14d9469fb55\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.935052 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-combined-ca-bundle\") pod \"74254242-70b2-4a96-832c-b14d9469fb55\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.935075 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-config-data\") pod 
\"74254242-70b2-4a96-832c-b14d9469fb55\" (UID: \"74254242-70b2-4a96-832c-b14d9469fb55\") " Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.936150 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "74254242-70b2-4a96-832c-b14d9469fb55" (UID: "74254242-70b2-4a96-832c-b14d9469fb55"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.936469 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "74254242-70b2-4a96-832c-b14d9469fb55" (UID: "74254242-70b2-4a96-832c-b14d9469fb55"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.941583 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74254242-70b2-4a96-832c-b14d9469fb55-kube-api-access-mst2r" (OuterVolumeSpecName: "kube-api-access-mst2r") pod "74254242-70b2-4a96-832c-b14d9469fb55" (UID: "74254242-70b2-4a96-832c-b14d9469fb55"). InnerVolumeSpecName "kube-api-access-mst2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.941994 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-scripts" (OuterVolumeSpecName: "scripts") pod "74254242-70b2-4a96-832c-b14d9469fb55" (UID: "74254242-70b2-4a96-832c-b14d9469fb55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:31 crc kubenswrapper[4872]: I0203 06:21:31.965526 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "74254242-70b2-4a96-832c-b14d9469fb55" (UID: "74254242-70b2-4a96-832c-b14d9469fb55"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.017860 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74254242-70b2-4a96-832c-b14d9469fb55" (UID: "74254242-70b2-4a96-832c-b14d9469fb55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.036772 4872 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.036808 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mst2r\" (UniqueName: \"kubernetes.io/projected/74254242-70b2-4a96-832c-b14d9469fb55-kube-api-access-mst2r\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.036819 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.036827 4872 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.036835 4872 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74254242-70b2-4a96-832c-b14d9469fb55-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.036843 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.047529 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-config-data" (OuterVolumeSpecName: "config-data") pod "74254242-70b2-4a96-832c-b14d9469fb55" (UID: "74254242-70b2-4a96-832c-b14d9469fb55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.138027 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74254242-70b2-4a96-832c-b14d9469fb55-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.563923 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74254242-70b2-4a96-832c-b14d9469fb55","Type":"ContainerDied","Data":"51e5e04aa03303c02fccdd96d59f269b2ef378b9fafbd48c950d45615c857d2d"} Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.564077 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.564457 4872 scope.go:117] "RemoveContainer" containerID="1aecb7fdcadd44568d4fff18aff8097658ea9e802bb459aaa6b6138b4d58c602" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.590840 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.597893 4872 scope.go:117] "RemoveContainer" containerID="fc3b41323d4de808a2039ee877898cbea1c31ecca78c822b67d5f4ddb00638c5" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.603463 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.618763 4872 scope.go:117] "RemoveContainer" containerID="a029b58c464fa2bbb44a6a4b0bd076717230c4008f3fff5fb17d49c7a5cd85ef" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.642966 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:32 crc kubenswrapper[4872]: E0203 06:21:32.646250 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-central-agent" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.646364 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-central-agent" Feb 03 06:21:32 crc kubenswrapper[4872]: E0203 06:21:32.646439 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="proxy-httpd" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.646520 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="proxy-httpd" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.648256 4872 scope.go:117] "RemoveContainer" containerID="415114c15d51347effca4b0ccc8c18462409daa4200985fcc95cb0ce5e7cf3da" Feb 03 06:21:32 crc kubenswrapper[4872]: E0203 06:21:32.649651 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-notification-agent" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.649766 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-notification-agent" Feb 03 06:21:32 crc kubenswrapper[4872]: E0203 06:21:32.649840 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="sg-core" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.649898 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="sg-core" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.650259 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-notification-agent" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.650335 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="sg-core" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.650406 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="proxy-httpd" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.650468 4872 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="74254242-70b2-4a96-832c-b14d9469fb55" containerName="ceilometer-central-agent" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.657005 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.665857 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.666066 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.666202 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.666521 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.850043 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.850405 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.850543 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-run-httpd\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.850644 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-scripts\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.850784 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-config-data\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.850927 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgw8g\" (UniqueName: \"kubernetes.io/projected/d41eb0ca-227e-4732-8f2b-561b10a9d93a-kube-api-access-qgw8g\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.851041 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 
06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.851204 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-log-httpd\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.952860 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgw8g\" (UniqueName: \"kubernetes.io/projected/d41eb0ca-227e-4732-8f2b-561b10a9d93a-kube-api-access-qgw8g\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.953146 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.953308 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-log-httpd\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.953463 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.953591 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.953717 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-run-httpd\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.953831 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-scripts\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.953929 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-config-data\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.954576 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-log-httpd\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 
crc kubenswrapper[4872]: I0203 06:21:32.954911 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-run-httpd\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.958782 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-config-data\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.960890 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.961615 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-scripts\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.970534 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.980181 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.982295 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgw8g\" (UniqueName: \"kubernetes.io/projected/d41eb0ca-227e-4732-8f2b-561b10a9d93a-kube-api-access-qgw8g\") pod \"ceilometer-0\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " pod="openstack/ceilometer-0" Feb 03 06:21:32 crc kubenswrapper[4872]: I0203 06:21:32.989397 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:33 crc kubenswrapper[4872]: I0203 06:21:33.489899 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:33 crc kubenswrapper[4872]: I0203 06:21:33.577239 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerStarted","Data":"efe9955070714bf7cf35b69b60476fe4d3e5d812b481900f98eefbdfdbb66784"} Feb 03 06:21:34 crc kubenswrapper[4872]: I0203 06:21:34.135095 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74254242-70b2-4a96-832c-b14d9469fb55" path="/var/lib/kubelet/pods/74254242-70b2-4a96-832c-b14d9469fb55/volumes" Feb 03 06:21:34 crc kubenswrapper[4872]: I0203 06:21:34.589448 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerStarted","Data":"70294c4e48d1f7a2b9a92e5e03ded4e711c828206f7ba04abaf8587f3872bd85"} Feb 03 06:21:34 crc kubenswrapper[4872]: I0203 06:21:34.719599 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 06:21:34 crc kubenswrapper[4872]: I0203 06:21:34.721646 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 06:21:34 crc kubenswrapper[4872]: I0203 06:21:34.726594 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 06:21:35 crc kubenswrapper[4872]: I0203 06:21:35.605811 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerStarted","Data":"940a1875c6b034e433665780fba745cc1e25b746b6ff90067fb88c59365d69d7"} Feb 03 06:21:35 crc kubenswrapper[4872]: I0203 06:21:35.613170 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 06:21:36 crc kubenswrapper[4872]: I0203 06:21:36.614318 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerStarted","Data":"b697c16bc99a8d7faa2b7c94eefb29e21cfec6fc8923898672a1f7dc2f2f041e"} Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.441011 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.578587 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgw9l\" (UniqueName: \"kubernetes.io/projected/fa342b9a-b292-4822-939d-201a76172c94-kube-api-access-wgw9l\") pod \"fa342b9a-b292-4822-939d-201a76172c94\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.578633 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-combined-ca-bundle\") pod \"fa342b9a-b292-4822-939d-201a76172c94\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.578678 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-config-data\") pod \"fa342b9a-b292-4822-939d-201a76172c94\" (UID: \"fa342b9a-b292-4822-939d-201a76172c94\") " Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.588394 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa342b9a-b292-4822-939d-201a76172c94-kube-api-access-wgw9l" (OuterVolumeSpecName: "kube-api-access-wgw9l") pod "fa342b9a-b292-4822-939d-201a76172c94" (UID: "fa342b9a-b292-4822-939d-201a76172c94"). InnerVolumeSpecName "kube-api-access-wgw9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.613654 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-config-data" (OuterVolumeSpecName: "config-data") pod "fa342b9a-b292-4822-939d-201a76172c94" (UID: "fa342b9a-b292-4822-939d-201a76172c94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.627312 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa342b9a-b292-4822-939d-201a76172c94" (UID: "fa342b9a-b292-4822-939d-201a76172c94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.638742 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerStarted","Data":"84a001a660d33bd571047cbbc6d98debc0442bd213adbe39105afeec61826d72"} Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.639069 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.642225 4872 generic.go:334] "Generic (PLEG): container finished" podID="fa342b9a-b292-4822-939d-201a76172c94" containerID="bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848" exitCode=137 Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.642277 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa342b9a-b292-4822-939d-201a76172c94","Type":"ContainerDied","Data":"bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848"} Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.642306 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fa342b9a-b292-4822-939d-201a76172c94","Type":"ContainerDied","Data":"637c21fed12b330ab40e35885ed5993650bd9a14a0fd5fcafdd76fd0888efe56"} Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.642323 4872 scope.go:117] "RemoveContainer" containerID="bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.642443 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.681943 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgw9l\" (UniqueName: \"kubernetes.io/projected/fa342b9a-b292-4822-939d-201a76172c94-kube-api-access-wgw9l\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.681968 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.681977 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa342b9a-b292-4822-939d-201a76172c94-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.694651 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.733370232 podStartE2EDuration="6.694627674s" podCreationTimestamp="2026-02-03 06:21:32 +0000 UTC" firstStartedPulling="2026-02-03 06:21:33.503490348 +0000 UTC m=+1264.086181782" lastFinishedPulling="2026-02-03 06:21:37.4647478 +0000 UTC m=+1268.047439224" observedRunningTime="2026-02-03 06:21:38.675886577 +0000 UTC m=+1269.258577991" watchObservedRunningTime="2026-02-03 06:21:38.694627674 +0000 UTC m=+1269.277319088" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.715643 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.719526 4872 scope.go:117] "RemoveContainer" containerID="bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848" Feb 03 06:21:38 crc kubenswrapper[4872]: 
E0203 06:21:38.719945 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848\": container with ID starting with bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848 not found: ID does not exist" containerID="bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.719976 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848"} err="failed to get container status \"bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848\": rpc error: code = NotFound desc = could not find container \"bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848\": container with ID starting with bd251fa96611a858ced253a2ed0c375cd7b79ff380c3997b9d9907762083c848 not found: ID does not exist" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.740615 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.752246 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:38 crc kubenswrapper[4872]: E0203 06:21:38.752906 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa342b9a-b292-4822-939d-201a76172c94" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.752975 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa342b9a-b292-4822-939d-201a76172c94" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.753206 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa342b9a-b292-4822-939d-201a76172c94" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.753919 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.757990 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.758191 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.758615 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.760823 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.886609 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdq56\" (UniqueName: \"kubernetes.io/projected/756dc212-f1ae-44f8-bcb1-d5c4180da686-kube-api-access-fdq56\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.886683 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.886938 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.886983 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.887045 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.989320 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.989396 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.989465 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.989573 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdq56\" (UniqueName: \"kubernetes.io/projected/756dc212-f1ae-44f8-bcb1-d5c4180da686-kube-api-access-fdq56\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:38 crc kubenswrapper[4872]: I0203 06:21:38.989609 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.004306 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.012403 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.015801 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.023476 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756dc212-f1ae-44f8-bcb1-d5c4180da686-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.026450 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdq56\" (UniqueName: \"kubernetes.io/projected/756dc212-f1ae-44f8-bcb1-d5c4180da686-kube-api-access-fdq56\") pod \"nova-cell1-novncproxy-0\" (UID: \"756dc212-f1ae-44f8-bcb1-d5c4180da686\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.074290 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.549423 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.681773 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"756dc212-f1ae-44f8-bcb1-d5c4180da686","Type":"ContainerStarted","Data":"29f6d68f25d8fcc8f12d820dd26dee351da5ee7ae7ac47db17365da5e5deca2d"} Feb 03 06:21:39 crc kubenswrapper[4872]: I0203 06:21:39.951057 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 06:21:40 crc kubenswrapper[4872]: I0203 06:21:40.140039 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa342b9a-b292-4822-939d-201a76172c94" path="/var/lib/kubelet/pods/fa342b9a-b292-4822-939d-201a76172c94/volumes" Feb 03 06:21:40 crc kubenswrapper[4872]: I0203 06:21:40.691504 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"756dc212-f1ae-44f8-bcb1-d5c4180da686","Type":"ContainerStarted","Data":"fa7b54a39b6f9c6f4fd167f89c40afd9864263e1eab289697293f1453751d13a"} Feb 03 06:21:40 crc kubenswrapper[4872]: I0203 06:21:40.725245 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.725220886 podStartE2EDuration="2.725220886s" podCreationTimestamp="2026-02-03 06:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:40.712470286 +0000 UTC m=+1271.295161710" watchObservedRunningTime="2026-02-03 06:21:40.725220886 +0000 UTC m=+1271.307912300" Feb 03 06:21:40 crc kubenswrapper[4872]: I0203 06:21:40.847991 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 06:21:40 crc kubenswrapper[4872]: I0203 06:21:40.848438 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 06:21:40 crc kubenswrapper[4872]: I0203 06:21:40.853246 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 06:21:40 crc kubenswrapper[4872]: I0203 06:21:40.859603 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 06:21:41 crc kubenswrapper[4872]: I0203 06:21:41.704727 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 06:21:41 crc kubenswrapper[4872]: I0203 06:21:41.715958 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 06:21:41 crc kubenswrapper[4872]: I0203 06:21:41.950109 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-vfqsm"] Feb 03 06:21:41 crc kubenswrapper[4872]: I0203 06:21:41.952334 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:41 crc kubenswrapper[4872]: I0203 06:21:41.967322 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-vfqsm"] Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.069122 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.069199 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6mc\" (UniqueName: \"kubernetes.io/projected/d1151951-b500-49fb-99b3-23c375b6f493-kube-api-access-fr6mc\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.069239 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.069274 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.069305 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.069337 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-config\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.170846 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr6mc\" (UniqueName: \"kubernetes.io/projected/d1151951-b500-49fb-99b3-23c375b6f493-kube-api-access-fr6mc\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.171081 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.171175 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.171341 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.171490 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-config\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.172199 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.172136 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.172160 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-config\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.171950 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.172497 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.173091 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.191659 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr6mc\" (UniqueName: 
\"kubernetes.io/projected/d1151951-b500-49fb-99b3-23c375b6f493-kube-api-access-fr6mc\") pod \"dnsmasq-dns-cd5cbd7b9-vfqsm\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.295769 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:42 crc kubenswrapper[4872]: I0203 06:21:42.758064 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-vfqsm"] Feb 03 06:21:42 crc kubenswrapper[4872]: W0203 06:21:42.767172 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1151951_b500_49fb_99b3_23c375b6f493.slice/crio-36927aa1665ad21cb10587cf1538c32aac6ec93240a2ccf4d3b1bb9b5e0816e5 WatchSource:0}: Error finding container 36927aa1665ad21cb10587cf1538c32aac6ec93240a2ccf4d3b1bb9b5e0816e5: Status 404 returned error can't find the container with id 36927aa1665ad21cb10587cf1538c32aac6ec93240a2ccf4d3b1bb9b5e0816e5 Feb 03 06:21:43 crc kubenswrapper[4872]: I0203 06:21:43.721230 4872 generic.go:334] "Generic (PLEG): container finished" podID="d1151951-b500-49fb-99b3-23c375b6f493" containerID="43c3a070f464dd41025b59d6a4b616dc66cef0e30dd20088fd4b7014491fd718" exitCode=0 Feb 03 06:21:43 crc kubenswrapper[4872]: I0203 06:21:43.721438 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" event={"ID":"d1151951-b500-49fb-99b3-23c375b6f493","Type":"ContainerDied","Data":"43c3a070f464dd41025b59d6a4b616dc66cef0e30dd20088fd4b7014491fd718"} Feb 03 06:21:43 crc kubenswrapper[4872]: I0203 06:21:43.722415 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" event={"ID":"d1151951-b500-49fb-99b3-23c375b6f493","Type":"ContainerStarted","Data":"36927aa1665ad21cb10587cf1538c32aac6ec93240a2ccf4d3b1bb9b5e0816e5"} Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.075484 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.277707 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.608211 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.608528 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="sg-core" containerID="cri-o://b697c16bc99a8d7faa2b7c94eefb29e21cfec6fc8923898672a1f7dc2f2f041e" gracePeriod=30 Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.608634 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-notification-agent" containerID="cri-o://940a1875c6b034e433665780fba745cc1e25b746b6ff90067fb88c59365d69d7" gracePeriod=30 Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.608802 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="proxy-httpd" containerID="cri-o://84a001a660d33bd571047cbbc6d98debc0442bd213adbe39105afeec61826d72" gracePeriod=30 Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 
06:21:44.608487 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-central-agent" containerID="cri-o://70294c4e48d1f7a2b9a92e5e03ded4e711c828206f7ba04abaf8587f3872bd85" gracePeriod=30 Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.748556 4872 generic.go:334] "Generic (PLEG): container finished" podID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerID="b697c16bc99a8d7faa2b7c94eefb29e21cfec6fc8923898672a1f7dc2f2f041e" exitCode=2 Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.748635 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerDied","Data":"b697c16bc99a8d7faa2b7c94eefb29e21cfec6fc8923898672a1f7dc2f2f041e"} Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.751725 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-log" containerID="cri-o://f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb" gracePeriod=30 Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.752353 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" event={"ID":"d1151951-b500-49fb-99b3-23c375b6f493","Type":"ContainerStarted","Data":"9b97138e5d8999006d6b381f0be9e891c88df9673435649f2f6888f63c8b06e4"} Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.752910 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-api" containerID="cri-o://19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12" gracePeriod=30 Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.753711 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:44 crc kubenswrapper[4872]: I0203 06:21:44.802484 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" podStartSLOduration=3.8024623220000002 podStartE2EDuration="3.802462322s" podCreationTimestamp="2026-02-03 06:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:44.790331246 +0000 UTC m=+1275.373022660" watchObservedRunningTime="2026-02-03 06:21:44.802462322 +0000 UTC m=+1275.385153736" Feb 03 06:21:45 crc kubenswrapper[4872]: E0203 06:21:45.013341 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd41eb0ca_227e_4732_8f2b_561b10a9d93a.slice/crio-conmon-84a001a660d33bd571047cbbc6d98debc0442bd213adbe39105afeec61826d72.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ef4752_1c13_4c96_ad78_a9451291fd7a.slice/crio-f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd41eb0ca_227e_4732_8f2b_561b10a9d93a.slice/crio-84a001a660d33bd571047cbbc6d98debc0442bd213adbe39105afeec61826d72.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd41eb0ca_227e_4732_8f2b_561b10a9d93a.slice/crio-70294c4e48d1f7a2b9a92e5e03ded4e711c828206f7ba04abaf8587f3872bd85.scope\": RecentStats: unable to find data in memory cache]" Feb 03 06:21:45 crc kubenswrapper[4872]: I0203 06:21:45.765233 4872 generic.go:334] "Generic (PLEG): container finished" podID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerID="84a001a660d33bd571047cbbc6d98debc0442bd213adbe39105afeec61826d72" exitCode=0 Feb 03 06:21:45 crc kubenswrapper[4872]: I0203 06:21:45.765494 4872 generic.go:334] "Generic (PLEG): container finished" podID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerID="70294c4e48d1f7a2b9a92e5e03ded4e711c828206f7ba04abaf8587f3872bd85" exitCode=0 Feb 03 06:21:45 crc kubenswrapper[4872]: I0203 06:21:45.765305 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerDied","Data":"84a001a660d33bd571047cbbc6d98debc0442bd213adbe39105afeec61826d72"} Feb 03 06:21:45 crc kubenswrapper[4872]: I0203 06:21:45.765559 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerDied","Data":"70294c4e48d1f7a2b9a92e5e03ded4e711c828206f7ba04abaf8587f3872bd85"} Feb 03 06:21:45 crc kubenswrapper[4872]: I0203 06:21:45.767379 4872 generic.go:334] "Generic (PLEG): container finished" podID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerID="f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb" exitCode=143 Feb 03 06:21:45 crc kubenswrapper[4872]: I0203 06:21:45.767442 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89ef4752-1c13-4c96-ad78-a9451291fd7a","Type":"ContainerDied","Data":"f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb"} Feb 03 06:21:46 crc kubenswrapper[4872]: I0203 06:21:46.781784 4872 generic.go:334] "Generic (PLEG): container finished" podID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerID="940a1875c6b034e433665780fba745cc1e25b746b6ff90067fb88c59365d69d7" exitCode=0 Feb 03 06:21:46 crc kubenswrapper[4872]: I0203 06:21:46.781874 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerDied","Data":"940a1875c6b034e433665780fba745cc1e25b746b6ff90067fb88c59365d69d7"} Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.166700 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.191463 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-scripts\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.191524 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-sg-core-conf-yaml\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.191568 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-run-httpd\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.191934 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.191654 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-log-httpd\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.192258 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.192326 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-combined-ca-bundle\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.192352 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-ceilometer-tls-certs\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.192394 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-config-data\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.192419 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgw8g\" (UniqueName: \"kubernetes.io/projected/d41eb0ca-227e-4732-8f2b-561b10a9d93a-kube-api-access-qgw8g\") pod \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\" (UID: \"d41eb0ca-227e-4732-8f2b-561b10a9d93a\") " Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.193142 4872 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.193156 4872 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d41eb0ca-227e-4732-8f2b-561b10a9d93a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.209183 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-scripts" (OuterVolumeSpecName: "scripts") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.212138 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d41eb0ca-227e-4732-8f2b-561b10a9d93a-kube-api-access-qgw8g" (OuterVolumeSpecName: "kube-api-access-qgw8g") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "kube-api-access-qgw8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.276807 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.277581 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.295173 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.295238 4872 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.295252 4872 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.295264 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgw8g\" (UniqueName: \"kubernetes.io/projected/d41eb0ca-227e-4732-8f2b-561b10a9d93a-kube-api-access-qgw8g\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.314945 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.322452 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-config-data" (OuterVolumeSpecName: "config-data") pod "d41eb0ca-227e-4732-8f2b-561b10a9d93a" (UID: "d41eb0ca-227e-4732-8f2b-561b10a9d93a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.396783 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.396810 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d41eb0ca-227e-4732-8f2b-561b10a9d93a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.803984 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d41eb0ca-227e-4732-8f2b-561b10a9d93a","Type":"ContainerDied","Data":"efe9955070714bf7cf35b69b60476fe4d3e5d812b481900f98eefbdfdbb66784"} Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.804048 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.804065 4872 scope.go:117] "RemoveContainer" containerID="84a001a660d33bd571047cbbc6d98debc0442bd213adbe39105afeec61826d72" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.838006 4872 scope.go:117] "RemoveContainer" containerID="b697c16bc99a8d7faa2b7c94eefb29e21cfec6fc8923898672a1f7dc2f2f041e" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.862546 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.881489 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.897243 4872 scope.go:117] "RemoveContainer" containerID="940a1875c6b034e433665780fba745cc1e25b746b6ff90067fb88c59365d69d7" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.911184 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:47 crc kubenswrapper[4872]: E0203 06:21:47.911611 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-notification-agent" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.911627 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-notification-agent" Feb 03 06:21:47 crc kubenswrapper[4872]: E0203 06:21:47.911673 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-central-agent" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.911701 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-central-agent" Feb 03 06:21:47 crc kubenswrapper[4872]: E0203 06:21:47.911716 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="proxy-httpd" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.911726 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="proxy-httpd" Feb 03 06:21:47 crc kubenswrapper[4872]: E0203 06:21:47.911753 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="sg-core" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.911760 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="sg-core" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.912176 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="proxy-httpd" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.912204 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-central-agent" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.912221 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="ceilometer-notification-agent" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.912240 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" containerName="sg-core" Feb 03 06:21:47 crc kubenswrapper[4872]: I0203 06:21:47.966996 4872 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.007978 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.009726 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.009932 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.010086 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.014947 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/248c9cda-018d-4cea-8dc8-c6a77788155a-run-httpd\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.014988 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.015034 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-config-data\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.015062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.015088 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxzc\" (UniqueName: \"kubernetes.io/projected/248c9cda-018d-4cea-8dc8-c6a77788155a-kube-api-access-xcxzc\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.015157 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.015201 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/248c9cda-018d-4cea-8dc8-c6a77788155a-log-httpd\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.015234 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-scripts\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.048622 4872 scope.go:117] "RemoveContainer" containerID="70294c4e48d1f7a2b9a92e5e03ded4e711c828206f7ba04abaf8587f3872bd85" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117183 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117569 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/248c9cda-018d-4cea-8dc8-c6a77788155a-log-httpd\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117604 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-scripts\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117637 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/248c9cda-018d-4cea-8dc8-c6a77788155a-run-httpd\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117668 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117716 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-config-data\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117744 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.117770 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcxzc\" (UniqueName: \"kubernetes.io/projected/248c9cda-018d-4cea-8dc8-c6a77788155a-kube-api-access-xcxzc\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.118501 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/248c9cda-018d-4cea-8dc8-c6a77788155a-log-httpd\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 
06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.123400 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-scripts\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.123697 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.123769 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/248c9cda-018d-4cea-8dc8-c6a77788155a-run-httpd\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.135722 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.138408 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-config-data\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.144107 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d41eb0ca-227e-4732-8f2b-561b10a9d93a" path="/var/lib/kubelet/pods/d41eb0ca-227e-4732-8f2b-561b10a9d93a/volumes" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.157185 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/248c9cda-018d-4cea-8dc8-c6a77788155a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.157911 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcxzc\" (UniqueName: \"kubernetes.io/projected/248c9cda-018d-4cea-8dc8-c6a77788155a-kube-api-access-xcxzc\") pod \"ceilometer-0\" (UID: \"248c9cda-018d-4cea-8dc8-c6a77788155a\") " pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.336627 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.378310 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.431194 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdqrc\" (UniqueName: \"kubernetes.io/projected/89ef4752-1c13-4c96-ad78-a9451291fd7a-kube-api-access-bdqrc\") pod \"89ef4752-1c13-4c96-ad78-a9451291fd7a\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.431242 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-combined-ca-bundle\") pod \"89ef4752-1c13-4c96-ad78-a9451291fd7a\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.431271 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-config-data\") pod \"89ef4752-1c13-4c96-ad78-a9451291fd7a\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.431440 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ef4752-1c13-4c96-ad78-a9451291fd7a-logs\") pod \"89ef4752-1c13-4c96-ad78-a9451291fd7a\" (UID: \"89ef4752-1c13-4c96-ad78-a9451291fd7a\") " Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.432170 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89ef4752-1c13-4c96-ad78-a9451291fd7a-logs" (OuterVolumeSpecName: "logs") pod "89ef4752-1c13-4c96-ad78-a9451291fd7a" (UID: "89ef4752-1c13-4c96-ad78-a9451291fd7a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.440393 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ef4752-1c13-4c96-ad78-a9451291fd7a-kube-api-access-bdqrc" (OuterVolumeSpecName: "kube-api-access-bdqrc") pod "89ef4752-1c13-4c96-ad78-a9451291fd7a" (UID: "89ef4752-1c13-4c96-ad78-a9451291fd7a"). InnerVolumeSpecName "kube-api-access-bdqrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.476832 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-config-data" (OuterVolumeSpecName: "config-data") pod "89ef4752-1c13-4c96-ad78-a9451291fd7a" (UID: "89ef4752-1c13-4c96-ad78-a9451291fd7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.476859 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89ef4752-1c13-4c96-ad78-a9451291fd7a" (UID: "89ef4752-1c13-4c96-ad78-a9451291fd7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.533566 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89ef4752-1c13-4c96-ad78-a9451291fd7a-logs\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.533603 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdqrc\" (UniqueName: \"kubernetes.io/projected/89ef4752-1c13-4c96-ad78-a9451291fd7a-kube-api-access-bdqrc\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.533615 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.533624 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89ef4752-1c13-4c96-ad78-a9451291fd7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.823081 4872 generic.go:334] "Generic (PLEG): container finished" podID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerID="19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12" exitCode=0 Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.823377 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89ef4752-1c13-4c96-ad78-a9451291fd7a","Type":"ContainerDied","Data":"19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12"} Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.823401 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89ef4752-1c13-4c96-ad78-a9451291fd7a","Type":"ContainerDied","Data":"dec98314dedc7418e5cac054f1723281f56eb5eee71b924258b5d6e1ddd0f780"} Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.823417 4872 scope.go:117] "RemoveContainer" containerID="19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.823510 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.846391 4872 scope.go:117] "RemoveContainer" containerID="f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.849000 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.863571 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.877063 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.896999 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:48 crc kubenswrapper[4872]: E0203 06:21:48.897362 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-log" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.897373 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-log" Feb 03 06:21:48 crc kubenswrapper[4872]: E0203 06:21:48.897397 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-api" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.897404 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-api" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.897588 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-log" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.897600 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" containerName="nova-api-api" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.905877 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.913218 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.918363 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.919183 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.933978 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.946161 4872 scope.go:117] "RemoveContainer" containerID="19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12" Feb 03 06:21:48 crc kubenswrapper[4872]: E0203 06:21:48.946571 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12\": container with ID starting with 19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12 not found: ID does not exist" containerID="19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.946602 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12"} err="failed to get container status \"19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12\": rpc error: code = NotFound desc = could not find container \"19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12\": container with ID starting with 19f1a430e385808d5c00fc4e8097332edb8e13573b9c104f558c879497d3aa12 not found: ID does not exist" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.946623 4872 scope.go:117] "RemoveContainer" containerID="f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.946650 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.946776 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.946805 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060118db-a448-4e29-bc32-bfb66dace3a6-logs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.946835 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzsnn\" (UniqueName: \"kubernetes.io/projected/060118db-a448-4e29-bc32-bfb66dace3a6-kube-api-access-vzsnn\") pod \"nova-api-0\" (UID: 
\"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.947071 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-config-data\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.947152 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:48 crc kubenswrapper[4872]: E0203 06:21:48.948759 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb\": container with ID starting with f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb not found: ID does not exist" containerID="f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb" Feb 03 06:21:48 crc kubenswrapper[4872]: I0203 06:21:48.948810 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb"} err="failed to get container status \"f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb\": rpc error: code = NotFound desc = could not find container \"f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb\": container with ID starting with f157a1da5bee2dd2f54a10529a1c920b9c83efd0bf1e912fddf9bf77c7944feb not found: ID does not exist" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.048961 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-config-data\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.049702 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.049736 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.049810 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.049837 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060118db-a448-4e29-bc32-bfb66dace3a6-logs\") pod \"nova-api-0\" (UID: 
\"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.049867 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzsnn\" (UniqueName: \"kubernetes.io/projected/060118db-a448-4e29-bc32-bfb66dace3a6-kube-api-access-vzsnn\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.050657 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060118db-a448-4e29-bc32-bfb66dace3a6-logs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.054621 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-config-data\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.054794 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.054828 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.060293 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.076536 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.079337 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzsnn\" (UniqueName: \"kubernetes.io/projected/060118db-a448-4e29-bc32-bfb66dace3a6-kube-api-access-vzsnn\") pod \"nova-api-0\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") " pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.107846 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.240041 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.796759 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.835085 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"060118db-a448-4e29-bc32-bfb66dace3a6","Type":"ContainerStarted","Data":"8d33fcc07f3009250ea606c34dec25cb7e5629db46f58cfec21d01c7558f6870"} Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.837273 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"248c9cda-018d-4cea-8dc8-c6a77788155a","Type":"ContainerStarted","Data":"28ce2a16ef519c4e2c182621d570405a33b0e51c5a2c1e7faaa6cb07c06ed683"} Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.837295 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"248c9cda-018d-4cea-8dc8-c6a77788155a","Type":"ContainerStarted","Data":"e6179ec08a99bf38c3f6b26d364a0aec4075538464858cb5c26b6740b7f56a8b"} Feb 03 06:21:49 crc kubenswrapper[4872]: I0203 06:21:49.854672 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.036289 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vxhcz"] Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.037681 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.040547 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.041053 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.046578 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxhcz"] Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.070281 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.070320 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-config-data\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.070366 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-scripts\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.070463 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhd8\" (UniqueName: 
\"kubernetes.io/projected/b4d4d4bd-0dd6-422f-87fd-d379c0110294-kube-api-access-xhhd8\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.150155 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ef4752-1c13-4c96-ad78-a9451291fd7a" path="/var/lib/kubelet/pods/89ef4752-1c13-4c96-ad78-a9451291fd7a/volumes" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.172877 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhd8\" (UniqueName: \"kubernetes.io/projected/b4d4d4bd-0dd6-422f-87fd-d379c0110294-kube-api-access-xhhd8\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.173041 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.173062 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-config-data\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.173101 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-scripts\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.180595 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-config-data\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.194348 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-scripts\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.197443 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.206524 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhd8\" (UniqueName: \"kubernetes.io/projected/b4d4d4bd-0dd6-422f-87fd-d379c0110294-kube-api-access-xhhd8\") pod \"nova-cell1-cell-mapping-vxhcz\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") " pod="openstack/nova-cell1-cell-mapping-vxhcz" 
Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.438522 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxhcz" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.851775 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"060118db-a448-4e29-bc32-bfb66dace3a6","Type":"ContainerStarted","Data":"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"} Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.852278 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"060118db-a448-4e29-bc32-bfb66dace3a6","Type":"ContainerStarted","Data":"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"} Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.858486 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"248c9cda-018d-4cea-8dc8-c6a77788155a","Type":"ContainerStarted","Data":"077c3ad65eb040360841d99dba1cd61db0bf80a8896e09d0a6555c56e2f73308"} Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.880620 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.880602716 podStartE2EDuration="2.880602716s" podCreationTimestamp="2026-02-03 06:21:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:50.873962094 +0000 UTC m=+1281.456653498" watchObservedRunningTime="2026-02-03 06:21:50.880602716 +0000 UTC m=+1281.463294130" Feb 03 06:21:50 crc kubenswrapper[4872]: I0203 06:21:50.928186 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxhcz"] Feb 03 06:21:51 crc kubenswrapper[4872]: I0203 06:21:51.871731 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"248c9cda-018d-4cea-8dc8-c6a77788155a","Type":"ContainerStarted","Data":"5a116918976048bf016d29bc9df36789ddc750864cdec9058b758df6efe9cad7"} Feb 03 06:21:51 crc kubenswrapper[4872]: I0203 06:21:51.874563 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxhcz" event={"ID":"b4d4d4bd-0dd6-422f-87fd-d379c0110294","Type":"ContainerStarted","Data":"d18f02849e401a11832221bbee2d047f86766eaa5c14f6b9aa71eb1fbc608296"} Feb 03 06:21:51 crc kubenswrapper[4872]: I0203 06:21:51.874614 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxhcz" event={"ID":"b4d4d4bd-0dd6-422f-87fd-d379c0110294","Type":"ContainerStarted","Data":"61ed561a09300c3d2e9c8ebd8247fcc47be9bdb50358977c1663ca6f21386255"} Feb 03 06:21:51 crc kubenswrapper[4872]: I0203 06:21:51.903099 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vxhcz" podStartSLOduration=1.903070122 podStartE2EDuration="1.903070122s" podCreationTimestamp="2026-02-03 06:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:21:51.892896374 +0000 UTC m=+1282.475587798" watchObservedRunningTime="2026-02-03 06:21:51.903070122 +0000 UTC m=+1282.485761576" Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.297812 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.381264 4872 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-c8lj5"]
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.382056 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" podUID="859d7045-14df-4211-bc27-60330376ee2a" containerName="dnsmasq-dns" containerID="cri-o://7378c0e4a6ee9b1950497e1f78ec1f774224cb02249d460a872f1db8b4ddd4db" gracePeriod=10
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.883443 4872 generic.go:334] "Generic (PLEG): container finished" podID="859d7045-14df-4211-bc27-60330376ee2a" containerID="7378c0e4a6ee9b1950497e1f78ec1f774224cb02249d460a872f1db8b4ddd4db" exitCode=0
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.884387 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" event={"ID":"859d7045-14df-4211-bc27-60330376ee2a","Type":"ContainerDied","Data":"7378c0e4a6ee9b1950497e1f78ec1f774224cb02249d460a872f1db8b4ddd4db"}
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.884410 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5" event={"ID":"859d7045-14df-4211-bc27-60330376ee2a","Type":"ContainerDied","Data":"e19b3cecdfe5d98230aae93d017bcce31af34366574e04e09e0071299b46d604"}
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.884420 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e19b3cecdfe5d98230aae93d017bcce31af34366574e04e09e0071299b46d604"
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.889774 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5"
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.927573 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-sb\") pod \"859d7045-14df-4211-bc27-60330376ee2a\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") "
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.928301 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-config\") pod \"859d7045-14df-4211-bc27-60330376ee2a\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") "
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.928363 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-svc\") pod \"859d7045-14df-4211-bc27-60330376ee2a\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") "
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.928403 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-swift-storage-0\") pod \"859d7045-14df-4211-bc27-60330376ee2a\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") "
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.928437 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rrc4\" (UniqueName: \"kubernetes.io/projected/859d7045-14df-4211-bc27-60330376ee2a-kube-api-access-8rrc4\") pod \"859d7045-14df-4211-bc27-60330376ee2a\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") "
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.928523 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-nb\") pod \"859d7045-14df-4211-bc27-60330376ee2a\" (UID: \"859d7045-14df-4211-bc27-60330376ee2a\") "
Feb 03 06:21:52 crc kubenswrapper[4872]: I0203 06:21:52.983851 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859d7045-14df-4211-bc27-60330376ee2a-kube-api-access-8rrc4" (OuterVolumeSpecName: "kube-api-access-8rrc4") pod "859d7045-14df-4211-bc27-60330376ee2a" (UID: "859d7045-14df-4211-bc27-60330376ee2a"). InnerVolumeSpecName "kube-api-access-8rrc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.000563 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "859d7045-14df-4211-bc27-60330376ee2a" (UID: "859d7045-14df-4211-bc27-60330376ee2a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.009281 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "859d7045-14df-4211-bc27-60330376ee2a" (UID: "859d7045-14df-4211-bc27-60330376ee2a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.025997 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "859d7045-14df-4211-bc27-60330376ee2a" (UID: "859d7045-14df-4211-bc27-60330376ee2a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.031591 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.031627 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.031644 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rrc4\" (UniqueName: \"kubernetes.io/projected/859d7045-14df-4211-bc27-60330376ee2a-kube-api-access-8rrc4\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.031655 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.038247 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-config" (OuterVolumeSpecName: "config") pod "859d7045-14df-4211-bc27-60330376ee2a" (UID: "859d7045-14df-4211-bc27-60330376ee2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.044504 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "859d7045-14df-4211-bc27-60330376ee2a" (UID: "859d7045-14df-4211-bc27-60330376ee2a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.133174 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-config\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.133407 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859d7045-14df-4211-bc27-60330376ee2a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.895952 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-c8lj5"
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.896177 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"248c9cda-018d-4cea-8dc8-c6a77788155a","Type":"ContainerStarted","Data":"b6706e0a68d0d6b8011bffff77da46b6b1251cc14365bf47c48a395676df6e9a"}
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.897995 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.935984 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.772357902 podStartE2EDuration="6.9359612s" podCreationTimestamp="2026-02-03 06:21:47 +0000 UTC" firstStartedPulling="2026-02-03 06:21:48.892788495 +0000 UTC m=+1279.475479909" lastFinishedPulling="2026-02-03 06:21:53.056391793 +0000 UTC m=+1283.639083207" observedRunningTime="2026-02-03 06:21:53.92401307 +0000 UTC m=+1284.506704494" watchObservedRunningTime="2026-02-03 06:21:53.9359612 +0000 UTC m=+1284.518652634"
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.953346 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-c8lj5"]
Feb 03 06:21:53 crc kubenswrapper[4872]: I0203 06:21:53.963751 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-c8lj5"]
Feb 03 06:21:54 crc kubenswrapper[4872]: I0203 06:21:54.134663 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859d7045-14df-4211-bc27-60330376ee2a" path="/var/lib/kubelet/pods/859d7045-14df-4211-bc27-60330376ee2a/volumes"
Feb 03 06:21:56 crc kubenswrapper[4872]: I0203 06:21:56.940227 4872 generic.go:334] "Generic (PLEG): container finished" podID="b4d4d4bd-0dd6-422f-87fd-d379c0110294" containerID="d18f02849e401a11832221bbee2d047f86766eaa5c14f6b9aa71eb1fbc608296" exitCode=0
Feb 03 06:21:56 crc kubenswrapper[4872]: I0203 06:21:56.940502 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxhcz" event={"ID":"b4d4d4bd-0dd6-422f-87fd-d379c0110294","Type":"ContainerDied","Data":"d18f02849e401a11832221bbee2d047f86766eaa5c14f6b9aa71eb1fbc608296"}
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.343045 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxhcz"
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.439620 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-combined-ca-bundle\") pod \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") "
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.439709 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-scripts\") pod \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") "
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.439748 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhd8\" (UniqueName: \"kubernetes.io/projected/b4d4d4bd-0dd6-422f-87fd-d379c0110294-kube-api-access-xhhd8\") pod \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") "
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.439950 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-config-data\") pod \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\" (UID: \"b4d4d4bd-0dd6-422f-87fd-d379c0110294\") "
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.445447 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d4d4bd-0dd6-422f-87fd-d379c0110294-kube-api-access-xhhd8" (OuterVolumeSpecName: "kube-api-access-xhhd8") pod "b4d4d4bd-0dd6-422f-87fd-d379c0110294" (UID: "b4d4d4bd-0dd6-422f-87fd-d379c0110294"). InnerVolumeSpecName "kube-api-access-xhhd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.448744 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-scripts" (OuterVolumeSpecName: "scripts") pod "b4d4d4bd-0dd6-422f-87fd-d379c0110294" (UID: "b4d4d4bd-0dd6-422f-87fd-d379c0110294"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.476525 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-config-data" (OuterVolumeSpecName: "config-data") pod "b4d4d4bd-0dd6-422f-87fd-d379c0110294" (UID: "b4d4d4bd-0dd6-422f-87fd-d379c0110294"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.492926 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d4d4bd-0dd6-422f-87fd-d379c0110294" (UID: "b4d4d4bd-0dd6-422f-87fd-d379c0110294"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.543173 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.543222 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.543244 4872 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d4d4bd-0dd6-422f-87fd-d379c0110294-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.543262 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhd8\" (UniqueName: \"kubernetes.io/projected/b4d4d4bd-0dd6-422f-87fd-d379c0110294-kube-api-access-xhhd8\") on node \"crc\" DevicePath \"\""
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.963959 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vxhcz" event={"ID":"b4d4d4bd-0dd6-422f-87fd-d379c0110294","Type":"ContainerDied","Data":"61ed561a09300c3d2e9c8ebd8247fcc47be9bdb50358977c1663ca6f21386255"}
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.964250 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ed561a09300c3d2e9c8ebd8247fcc47be9bdb50358977c1663ca6f21386255"
Feb 03 06:21:58 crc kubenswrapper[4872]: I0203 06:21:58.964038 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vxhcz"
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.105336 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.105617 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-log" containerID="cri-o://7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2" gracePeriod=30
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.105710 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-api" containerID="cri-o://7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2" gracePeriod=30
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.150335 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.150575 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="13bbaa37-3973-4eb7-b03a-6b90547c76b8" containerName="nova-scheduler-scheduler" containerID="cri-o://7d32e03d7e5b3e08c5358e90e0a3d1b04b2a6d93a1bab521915a59ff473640c8" gracePeriod=30
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.184472 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.184879 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-log" containerID="cri-o://d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401" gracePeriod=30
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.185314 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-metadata" containerID="cri-o://d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c" gracePeriod=30
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.845082 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.970403 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-config-data\") pod \"060118db-a448-4e29-bc32-bfb66dace3a6\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") "
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.970472 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-internal-tls-certs\") pod \"060118db-a448-4e29-bc32-bfb66dace3a6\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") "
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.970522 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-public-tls-certs\") pod \"060118db-a448-4e29-bc32-bfb66dace3a6\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") "
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.970548 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060118db-a448-4e29-bc32-bfb66dace3a6-logs\") pod \"060118db-a448-4e29-bc32-bfb66dace3a6\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") "
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.970633 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzsnn\" (UniqueName: \"kubernetes.io/projected/060118db-a448-4e29-bc32-bfb66dace3a6-kube-api-access-vzsnn\") pod \"060118db-a448-4e29-bc32-bfb66dace3a6\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") "
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.970733 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-combined-ca-bundle\") pod \"060118db-a448-4e29-bc32-bfb66dace3a6\" (UID: \"060118db-a448-4e29-bc32-bfb66dace3a6\") "
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.973072 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/060118db-a448-4e29-bc32-bfb66dace3a6-logs" (OuterVolumeSpecName: "logs") pod "060118db-a448-4e29-bc32-bfb66dace3a6" (UID: "060118db-a448-4e29-bc32-bfb66dace3a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 06:21:59 crc kubenswrapper[4872]: I0203 06:21:59.992366 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060118db-a448-4e29-bc32-bfb66dace3a6-kube-api-access-vzsnn" (OuterVolumeSpecName: "kube-api-access-vzsnn") pod "060118db-a448-4e29-bc32-bfb66dace3a6" (UID: "060118db-a448-4e29-bc32-bfb66dace3a6"). InnerVolumeSpecName "kube-api-access-vzsnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:21:59.995600 4872 generic.go:334] "Generic (PLEG): container finished" podID="f0bb52bd-b765-49d2-908e-38755908e575" containerID="d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401" exitCode=143
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:21:59.995657 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bb52bd-b765-49d2-908e-38755908e575","Type":"ContainerDied","Data":"d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401"}
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.000155 4872 generic.go:334] "Generic (PLEG): container finished" podID="060118db-a448-4e29-bc32-bfb66dace3a6" containerID="7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2" exitCode=0
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.000189 4872 generic.go:334] "Generic (PLEG): container finished" podID="060118db-a448-4e29-bc32-bfb66dace3a6" containerID="7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2" exitCode=143
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.000223 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"060118db-a448-4e29-bc32-bfb66dace3a6","Type":"ContainerDied","Data":"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"}
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.000258 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"060118db-a448-4e29-bc32-bfb66dace3a6","Type":"ContainerDied","Data":"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"}
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.000269 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"060118db-a448-4e29-bc32-bfb66dace3a6","Type":"ContainerDied","Data":"8d33fcc07f3009250ea606c34dec25cb7e5629db46f58cfec21d01c7558f6870"}
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.000286 4872 scope.go:117] "RemoveContainer" containerID="7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.000452 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.015121 4872 generic.go:334] "Generic (PLEG): container finished" podID="13bbaa37-3973-4eb7-b03a-6b90547c76b8" containerID="7d32e03d7e5b3e08c5358e90e0a3d1b04b2a6d93a1bab521915a59ff473640c8" exitCode=0
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.015200 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13bbaa37-3973-4eb7-b03a-6b90547c76b8","Type":"ContainerDied","Data":"7d32e03d7e5b3e08c5358e90e0a3d1b04b2a6d93a1bab521915a59ff473640c8"}
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.117230 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-config-data" (OuterVolumeSpecName: "config-data") pod "060118db-a448-4e29-bc32-bfb66dace3a6" (UID: "060118db-a448-4e29-bc32-bfb66dace3a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.117463 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/060118db-a448-4e29-bc32-bfb66dace3a6-logs\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.117476 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzsnn\" (UniqueName: \"kubernetes.io/projected/060118db-a448-4e29-bc32-bfb66dace3a6-kube-api-access-vzsnn\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.117486 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.133598 4872 scope.go:117] "RemoveContainer" containerID="7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.144988 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "060118db-a448-4e29-bc32-bfb66dace3a6" (UID: "060118db-a448-4e29-bc32-bfb66dace3a6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.163066 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "060118db-a448-4e29-bc32-bfb66dace3a6" (UID: "060118db-a448-4e29-bc32-bfb66dace3a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.178981 4872 scope.go:117] "RemoveContainer" containerID="7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.193903 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2\": container with ID starting with 7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2 not found: ID does not exist" containerID="7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.193961 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"} err="failed to get container status \"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2\": rpc error: code = NotFound desc = could not find container \"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2\": container with ID starting with 7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2 not found: ID does not exist"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.193997 4872 scope.go:117] "RemoveContainer" containerID="7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.212893 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "060118db-a448-4e29-bc32-bfb66dace3a6" (UID: "060118db-a448-4e29-bc32-bfb66dace3a6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.230241 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2\": container with ID starting with 7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2 not found: ID does not exist" containerID="7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.230296 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"} err="failed to get container status \"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2\": rpc error: code = NotFound desc = could not find container \"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2\": container with ID starting with 7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2 not found: ID does not exist"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.230321 4872 scope.go:117] "RemoveContainer" containerID="7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.235864 4872 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.235915 4872 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.235924 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/060118db-a448-4e29-bc32-bfb66dace3a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.245865 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2"} err="failed to get container status \"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2\": rpc error: code = NotFound desc = could not find container \"7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2\": container with ID starting with 7cc021c64cf52f85d7f8dbea1acbca6b7d0b45b24c07bc318889986ef159a3e2 not found: ID does not exist"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.245912 4872 scope.go:117] "RemoveContainer" containerID="7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.271104 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2"} err="failed to get container status \"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2\": rpc error: code = NotFound desc = could not find container \"7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2\": container with ID starting with 7c5f9183b380c3df2cdf99ef6ef72da8e4f220e19d04752448fbc8615e17f2a2 not found: ID does not exist"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.369784 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.395750 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.445323 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csdd6\" (UniqueName: \"kubernetes.io/projected/13bbaa37-3973-4eb7-b03a-6b90547c76b8-kube-api-access-csdd6\") pod \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") "
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.445584 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-config-data\") pod \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") "
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.445636 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-combined-ca-bundle\") pod \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\" (UID: \"13bbaa37-3973-4eb7-b03a-6b90547c76b8\") "
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.453501 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.587954 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.589352 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d4d4bd-0dd6-422f-87fd-d379c0110294" containerName="nova-manage"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.589375 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d4d4bd-0dd6-422f-87fd-d379c0110294" containerName="nova-manage"
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.589406 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859d7045-14df-4211-bc27-60330376ee2a" containerName="dnsmasq-dns"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.589412 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="859d7045-14df-4211-bc27-60330376ee2a" containerName="dnsmasq-dns"
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.589434 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bbaa37-3973-4eb7-b03a-6b90547c76b8" containerName="nova-scheduler-scheduler"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.589442 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bbaa37-3973-4eb7-b03a-6b90547c76b8" containerName="nova-scheduler-scheduler"
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.589464 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-api"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.589471 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-api"
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.589492 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-log"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.589500 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-log"
Feb 03 06:22:00 crc kubenswrapper[4872]: E0203 06:22:00.589520 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859d7045-14df-4211-bc27-60330376ee2a" containerName="init"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.589530 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="859d7045-14df-4211-bc27-60330376ee2a" containerName="init"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.600093 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-api"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.600166 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="859d7045-14df-4211-bc27-60330376ee2a" containerName="dnsmasq-dns"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.600192 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bbaa37-3973-4eb7-b03a-6b90547c76b8" containerName="nova-scheduler-scheduler"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.600237 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" containerName="nova-api-log"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.600271 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d4d4bd-0dd6-422f-87fd-d379c0110294" containerName="nova-manage"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.610386 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.611140 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13bbaa37-3973-4eb7-b03a-6b90547c76b8" (UID: "13bbaa37-3973-4eb7-b03a-6b90547c76b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.613890 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.616469 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.621336 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bbaa37-3973-4eb7-b03a-6b90547c76b8-kube-api-access-csdd6" (OuterVolumeSpecName: "kube-api-access-csdd6") pod "13bbaa37-3973-4eb7-b03a-6b90547c76b8" (UID: "13bbaa37-3973-4eb7-b03a-6b90547c76b8"). InnerVolumeSpecName "kube-api-access-csdd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.623189 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.631379 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.658218 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csdd6\" (UniqueName: \"kubernetes.io/projected/13bbaa37-3973-4eb7-b03a-6b90547c76b8-kube-api-access-csdd6\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.658262 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.676272 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-config-data" (OuterVolumeSpecName: "config-data") pod "13bbaa37-3973-4eb7-b03a-6b90547c76b8" (UID: "13bbaa37-3973-4eb7-b03a-6b90547c76b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.759747 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-config-data\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.759822 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.759841 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gt7\" (UniqueName: \"kubernetes.io/projected/baca7029-2f99-49c6-810f-7a25a2a853d0-kube-api-access-t2gt7\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.760279 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-public-tls-certs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.760363 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baca7029-2f99-49c6-810f-7a25a2a853d0-logs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.760420 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.760652 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bbaa37-3973-4eb7-b03a-6b90547c76b8-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.861980 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-public-tls-certs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.862247 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baca7029-2f99-49c6-810f-7a25a2a853d0-logs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.862272 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.862346 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-config-data\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.862398 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.862415 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2gt7\" (UniqueName: \"kubernetes.io/projected/baca7029-2f99-49c6-810f-7a25a2a853d0-kube-api-access-t2gt7\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.862774 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baca7029-2f99-49c6-810f-7a25a2a853d0-logs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.866314 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.866655 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-public-tls-certs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.869482 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-config-data\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.871191 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baca7029-2f99-49c6-810f-7a25a2a853d0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.881481 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2gt7\" (UniqueName: \"kubernetes.io/projected/baca7029-2f99-49c6-810f-7a25a2a853d0-kube-api-access-t2gt7\") pod \"nova-api-0\" (UID: \"baca7029-2f99-49c6-810f-7a25a2a853d0\") " pod="openstack/nova-api-0"
Feb 03 06:22:00 crc kubenswrapper[4872]: I0203 06:22:00.976984 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.026261 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13bbaa37-3973-4eb7-b03a-6b90547c76b8","Type":"ContainerDied","Data":"b132bba7d346cdf25038e86009771350776365f460540db09cc8b6ecf57b5b6d"}
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.026486 4872 scope.go:117] "RemoveContainer" containerID="7d32e03d7e5b3e08c5358e90e0a3d1b04b2a6d93a1bab521915a59ff473640c8"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.026333 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.070520 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.093961 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.115955 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.117379 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.130762 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.142884 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.180839 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a4b0fb-cc06-47b7-b789-9d321718a06c-config-data\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.181172 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a4b0fb-cc06-47b7-b789-9d321718a06c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.181493 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljlbw\" (UniqueName: \"kubernetes.io/projected/e5a4b0fb-cc06-47b7-b789-9d321718a06c-kube-api-access-ljlbw\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.274779 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.274853 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.274896 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.275543 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f33fa16560568ffcc087b44ca5f1c955596c896bcd4662e8ab64b8586efed14"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.275592 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://3f33fa16560568ffcc087b44ca5f1c955596c896bcd4662e8ab64b8586efed14" gracePeriod=600
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.283756 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a4b0fb-cc06-47b7-b789-9d321718a06c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.283829 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljlbw\" (UniqueName: \"kubernetes.io/projected/e5a4b0fb-cc06-47b7-b789-9d321718a06c-kube-api-access-ljlbw\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.283940 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a4b0fb-cc06-47b7-b789-9d321718a06c-config-data\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.289665 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a4b0fb-cc06-47b7-b789-9d321718a06c-config-data\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.295257 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a4b0fb-cc06-47b7-b789-9d321718a06c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.314918 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljlbw\" (UniqueName: \"kubernetes.io/projected/e5a4b0fb-cc06-47b7-b789-9d321718a06c-kube-api-access-ljlbw\") pod \"nova-scheduler-0\" (UID: \"e5a4b0fb-cc06-47b7-b789-9d321718a06c\") " pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.448374 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.541307 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 06:22:01 crc kubenswrapper[4872]: W0203 06:22:01.552088 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaca7029_2f99_49c6_810f_7a25a2a853d0.slice/crio-0858911a56bae469bebb37c18d3cebc956c7aa75236503a3acaee7b0fce9d7ef WatchSource:0}: Error finding container 0858911a56bae469bebb37c18d3cebc956c7aa75236503a3acaee7b0fce9d7ef: Status 404 returned error can't find the container with id 0858911a56bae469bebb37c18d3cebc956c7aa75236503a3acaee7b0fce9d7ef
Feb 03 06:22:01 crc kubenswrapper[4872]: I0203 06:22:01.944697 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.039156 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"baca7029-2f99-49c6-810f-7a25a2a853d0","Type":"ContainerStarted","Data":"da3b6a675fb9043bf2efe8a9cd24f3bd59068d4e38d1cef6b25429c21542760a"}
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.039318 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"baca7029-2f99-49c6-810f-7a25a2a853d0","Type":"ContainerStarted","Data":"3e9b041729aae0cc16d3a6bcbb88d87c3f1bf187e298c7ec92ca72a2dab33e15"}
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.039332 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"baca7029-2f99-49c6-810f-7a25a2a853d0","Type":"ContainerStarted","Data":"0858911a56bae469bebb37c18d3cebc956c7aa75236503a3acaee7b0fce9d7ef"}
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.041353 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5a4b0fb-cc06-47b7-b789-9d321718a06c","Type":"ContainerStarted","Data":"d76dbc0fa57a6ad2b5e6cbeece340c90da64f5b4df4bb99e0d0f9dbef14deccc"}
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.043663 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="3f33fa16560568ffcc087b44ca5f1c955596c896bcd4662e8ab64b8586efed14" exitCode=0
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.043749 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"3f33fa16560568ffcc087b44ca5f1c955596c896bcd4662e8ab64b8586efed14"}
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.043767 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"}
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.043783 4872 scope.go:117] "RemoveContainer" containerID="1ee0abf72dd022e7907c6192d8075ae69f194cd75ecc0b9f792ce2b1786381c8"
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.064735 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.064613623 podStartE2EDuration="2.064613623s" podCreationTimestamp="2026-02-03 06:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:22:02.055798577 +0000 UTC m=+1292.638489991" watchObservedRunningTime="2026-02-03 06:22:02.064613623 +0000 UTC m=+1292.647305067"
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.136054 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="060118db-a448-4e29-bc32-bfb66dace3a6" path="/var/lib/kubelet/pods/060118db-a448-4e29-bc32-bfb66dace3a6/volumes"
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.137062 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bbaa37-3973-4eb7-b03a-6b90547c76b8" path="/var/lib/kubelet/pods/13bbaa37-3973-4eb7-b03a-6b90547c76b8/volumes"
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.337205 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:46526->10.217.0.193:8775: read: connection reset by peer"
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.337328 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:46524->10.217.0.193:8775: read: connection reset by peer"
Feb 03 06:22:02 crc kubenswrapper[4872]: I0203 06:22:02.878452 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.014413 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n9s8\" (UniqueName: \"kubernetes.io/projected/f0bb52bd-b765-49d2-908e-38755908e575-kube-api-access-2n9s8\") pod \"f0bb52bd-b765-49d2-908e-38755908e575\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") "
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.014549 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-combined-ca-bundle\") pod \"f0bb52bd-b765-49d2-908e-38755908e575\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") "
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.014603 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-config-data\") pod \"f0bb52bd-b765-49d2-908e-38755908e575\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") "
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.014665 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb52bd-b765-49d2-908e-38755908e575-logs\") pod \"f0bb52bd-b765-49d2-908e-38755908e575\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") "
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.014739 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-nova-metadata-tls-certs\") pod \"f0bb52bd-b765-49d2-908e-38755908e575\" (UID: \"f0bb52bd-b765-49d2-908e-38755908e575\") "
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.026302 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0bb52bd-b765-49d2-908e-38755908e575-logs" (OuterVolumeSpecName: "logs") pod "f0bb52bd-b765-49d2-908e-38755908e575" (UID: "f0bb52bd-b765-49d2-908e-38755908e575"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.026866 4872 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb52bd-b765-49d2-908e-38755908e575-logs\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.088776 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bb52bd-b765-49d2-908e-38755908e575-kube-api-access-2n9s8" (OuterVolumeSpecName: "kube-api-access-2n9s8") pod "f0bb52bd-b765-49d2-908e-38755908e575" (UID: "f0bb52bd-b765-49d2-908e-38755908e575"). InnerVolumeSpecName "kube-api-access-2n9s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.117102 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e5a4b0fb-cc06-47b7-b789-9d321718a06c","Type":"ContainerStarted","Data":"51ddb3a0efaddd8d981e95a81210cad5fbf42f7112a505e4af380608be537eb2"}
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.133948 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n9s8\" (UniqueName: \"kubernetes.io/projected/f0bb52bd-b765-49d2-908e-38755908e575-kube-api-access-2n9s8\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.165163 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.165146018 podStartE2EDuration="2.165146018s" podCreationTimestamp="2026-02-03 06:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:22:03.162551565 +0000 UTC m=+1293.745242979" watchObservedRunningTime="2026-02-03 06:22:03.165146018 +0000 UTC m=+1293.747837432"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.185005 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0bb52bd-b765-49d2-908e-38755908e575" (UID: "f0bb52bd-b765-49d2-908e-38755908e575"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.185308 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-config-data" (OuterVolumeSpecName: "config-data") pod "f0bb52bd-b765-49d2-908e-38755908e575" (UID: "f0bb52bd-b765-49d2-908e-38755908e575"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.201948 4872 generic.go:334] "Generic (PLEG): container finished" podID="f0bb52bd-b765-49d2-908e-38755908e575" containerID="d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c" exitCode=0
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.203122 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.203679 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bb52bd-b765-49d2-908e-38755908e575","Type":"ContainerDied","Data":"d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c"}
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.203729 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0bb52bd-b765-49d2-908e-38755908e575","Type":"ContainerDied","Data":"53799548c90f9756aa2c0f4fc91f83e0d8a69e6ab5592e2f633e306eb2156081"}
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.203745 4872 scope.go:117] "RemoveContainer" containerID="d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.239897 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.239929 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.338862 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f0bb52bd-b765-49d2-908e-38755908e575" (UID: "f0bb52bd-b765-49d2-908e-38755908e575"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.341800 4872 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0bb52bd-b765-49d2-908e-38755908e575-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.344313 4872 scope.go:117] "RemoveContainer" containerID="d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.422419 4872 scope.go:117] "RemoveContainer" containerID="d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c"
Feb 03 06:22:03 crc kubenswrapper[4872]: E0203 06:22:03.423049 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c\": container with ID starting with d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c not found: ID does not exist" containerID="d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.423093 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c"} err="failed to get container status \"d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c\": rpc error: code = NotFound desc = could not find container \"d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c\": container with ID starting with d06da934800f1779eb9d4c475d795ac64258f70205286182ab36228c7772b45c not found: ID does not exist"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.423118 4872 scope.go:117] "RemoveContainer" containerID="d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401"
Feb 03 06:22:03 crc kubenswrapper[4872]: E0203 06:22:03.423351 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401\": container with ID starting with d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401 not found: ID does not exist" containerID="d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.423375 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401"} err="failed to get container status \"d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401\": rpc error: code = NotFound desc = could not find container \"d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401\": container with ID starting with d69a36717637478c7ea68ef4d961c4b598c22e7063a0ebf372a832827b77a401 not found: ID does not exist"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.530379 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.541820 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.561239 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 06:22:03 crc kubenswrapper[4872]: E0203 06:22:03.561700 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-metadata"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.561715 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-metadata"
Feb 03 06:22:03 crc kubenswrapper[4872]: E0203 06:22:03.561729 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-log"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.561735 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-log"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.561902 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-metadata"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.561923 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bb52bd-b765-49d2-908e-38755908e575" containerName="nova-metadata-log"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.562807 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.564863 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.565045 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.576395 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.647902 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.647960 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.648020 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-config-data\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.648062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4j7g\" (UniqueName: \"kubernetes.io/projected/96d27a75-4427-4ff9-82ad-4672a9d403da-kube-api-access-f4j7g\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.648088 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d27a75-4427-4ff9-82ad-4672a9d403da-logs\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.749666 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-config-data\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.749771 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4j7g\" (UniqueName: \"kubernetes.io/projected/96d27a75-4427-4ff9-82ad-4672a9d403da-kube-api-access-f4j7g\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.749801 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d27a75-4427-4ff9-82ad-4672a9d403da-logs\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0"
Feb 03
06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.749858 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0" Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.749892 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0" Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.750502 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96d27a75-4427-4ff9-82ad-4672a9d403da-logs\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0" Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.753907 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0" Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.754393 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-config-data\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0" Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.764269 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d27a75-4427-4ff9-82ad-4672a9d403da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0" Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.776935 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4j7g\" (UniqueName: \"kubernetes.io/projected/96d27a75-4427-4ff9-82ad-4672a9d403da-kube-api-access-f4j7g\") pod \"nova-metadata-0\" (UID: \"96d27a75-4427-4ff9-82ad-4672a9d403da\") " pod="openstack/nova-metadata-0" Feb 03 06:22:03 crc kubenswrapper[4872]: I0203 06:22:03.877947 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 06:22:04 crc kubenswrapper[4872]: I0203 06:22:04.135020 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bb52bd-b765-49d2-908e-38755908e575" path="/var/lib/kubelet/pods/f0bb52bd-b765-49d2-908e-38755908e575/volumes" Feb 03 06:22:04 crc kubenswrapper[4872]: I0203 06:22:04.381719 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 06:22:05 crc kubenswrapper[4872]: I0203 06:22:05.225204 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96d27a75-4427-4ff9-82ad-4672a9d403da","Type":"ContainerStarted","Data":"acd1f485cf4340cff3c7924178d2663a2581e95754285312b171b2e9b2c32825"} Feb 03 06:22:05 crc kubenswrapper[4872]: I0203 06:22:05.225540 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96d27a75-4427-4ff9-82ad-4672a9d403da","Type":"ContainerStarted","Data":"131a46eb530405aa3fe38ebde8c4629475a02a8a612057f4beee4ab4bc897df3"} Feb 03 06:22:05 crc kubenswrapper[4872]: I0203 06:22:05.225553 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"96d27a75-4427-4ff9-82ad-4672a9d403da","Type":"ContainerStarted","Data":"482349c4583c7628e9fb82c6538aaf5cabb15f260292605af58a03e8d2ccbf1b"} Feb 03 06:22:05 crc kubenswrapper[4872]: I0203 06:22:05.262034 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.262015913 podStartE2EDuration="2.262015913s" podCreationTimestamp="2026-02-03 06:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:22:05.261261755 +0000 UTC m=+1295.843953209" watchObservedRunningTime="2026-02-03 06:22:05.262015913 +0000 UTC m=+1295.844707337" Feb 03 06:22:06 crc kubenswrapper[4872]: I0203 06:22:06.448783 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 06:22:08 crc kubenswrapper[4872]: I0203 06:22:08.878556 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 06:22:08 crc kubenswrapper[4872]: I0203 06:22:08.879102 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 06:22:10 crc kubenswrapper[4872]: I0203 06:22:10.977946 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 06:22:10 crc kubenswrapper[4872]: I0203 06:22:10.978725 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 06:22:11 crc kubenswrapper[4872]: I0203 06:22:11.448991 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 06:22:11 crc kubenswrapper[4872]: I0203 06:22:11.488209 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 06:22:11 crc kubenswrapper[4872]: I0203 06:22:11.995853 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="baca7029-2f99-49c6-810f-7a25a2a853d0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:22:11 crc kubenswrapper[4872]: I0203 06:22:11.995883 4872 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="baca7029-2f99-49c6-810f-7a25a2a853d0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:22:12 crc kubenswrapper[4872]: I0203 06:22:12.320750 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 06:22:13 crc kubenswrapper[4872]: I0203 06:22:13.878765 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 06:22:13 crc kubenswrapper[4872]: I0203 06:22:13.878838 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 06:22:14 crc kubenswrapper[4872]: I0203 06:22:14.896934 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96d27a75-4427-4ff9-82ad-4672a9d403da" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:22:14 crc kubenswrapper[4872]: I0203 06:22:14.896970 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="96d27a75-4427-4ff9-82ad-4672a9d403da" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 06:22:18 crc kubenswrapper[4872]: I0203 06:22:18.393590 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 06:22:20 crc kubenswrapper[4872]: I0203 06:22:20.982479 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 06:22:20 crc kubenswrapper[4872]: I0203 06:22:20.983247 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 06:22:20 crc kubenswrapper[4872]: I0203 06:22:20.984225 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 06:22:20 crc kubenswrapper[4872]: I0203 06:22:20.994593 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 06:22:21 crc kubenswrapper[4872]: I0203 06:22:21.385608 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 06:22:21 crc kubenswrapper[4872]: I0203 06:22:21.396334 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 06:22:23 crc kubenswrapper[4872]: I0203 06:22:23.892213 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 06:22:23 crc kubenswrapper[4872]: I0203 06:22:23.898394 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 06:22:23 crc kubenswrapper[4872]: I0203 06:22:23.903545 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 06:22:24 crc kubenswrapper[4872]: I0203 06:22:24.423378 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 06:22:33 crc kubenswrapper[4872]: I0203 06:22:33.065453 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:22:33 crc kubenswrapper[4872]: I0203 
06:22:33.808825 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:22:38 crc kubenswrapper[4872]: I0203 06:22:38.395599 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerName="rabbitmq" containerID="cri-o://e151a3e574fcc4165fe498d721ea9e0b391cb3304402e14533f354c952fc43d9" gracePeriod=604796 Feb 03 06:22:38 crc kubenswrapper[4872]: I0203 06:22:38.678967 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerName="rabbitmq" containerID="cri-o://20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84" gracePeriod=604795 Feb 03 06:22:41 crc kubenswrapper[4872]: I0203 06:22:41.221830 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Feb 03 06:22:41 crc kubenswrapper[4872]: I0203 06:22:41.629340 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 03 06:22:44 crc kubenswrapper[4872]: I0203 06:22:44.629788 4872 generic.go:334] "Generic (PLEG): container finished" podID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerID="e151a3e574fcc4165fe498d721ea9e0b391cb3304402e14533f354c952fc43d9" exitCode=0 Feb 03 06:22:44 crc kubenswrapper[4872]: I0203 06:22:44.629944 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39c1e000-2f81-4251-a9b5-28563d87bb93","Type":"ContainerDied","Data":"e151a3e574fcc4165fe498d721ea9e0b391cb3304402e14533f354c952fc43d9"} Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.053300 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.131612 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-plugins-conf\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135070 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-erlang-cookie\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.133299 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135165 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c1e000-2f81-4251-a9b5-28563d87bb93-erlang-cookie-secret\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135202 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135240 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-tls\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135284 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-plugins\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135316 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-confd\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135399 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-server-conf\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135426 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c1e000-2f81-4251-a9b5-28563d87bb93-pod-info\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135445 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4dbs\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-kube-api-access-z4dbs\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135507 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-config-data\") pod \"39c1e000-2f81-4251-a9b5-28563d87bb93\" (UID: \"39c1e000-2f81-4251-a9b5-28563d87bb93\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.135629 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: 
"39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.136098 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.136459 4872 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.136485 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.136498 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.146847 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.150836 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c1e000-2f81-4251-a9b5-28563d87bb93-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.151640 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.151670 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-kube-api-access-z4dbs" (OuterVolumeSpecName: "kube-api-access-z4dbs") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "kube-api-access-z4dbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.152076 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/39c1e000-2f81-4251-a9b5-28563d87bb93-pod-info" (OuterVolumeSpecName: "pod-info") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.234974 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-config-data" (OuterVolumeSpecName: "config-data") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.238154 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.238188 4872 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/39c1e000-2f81-4251-a9b5-28563d87bb93-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.238216 4872 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.238228 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.238239 4872 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/39c1e000-2f81-4251-a9b5-28563d87bb93-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.238251 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4dbs\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-kube-api-access-z4dbs\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.248351 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.285634 4872 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.290412 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-server-conf" (OuterVolumeSpecName: "server-conf") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.341556 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-tls\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.341608 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-config-data\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.341645 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xhqb\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-kube-api-access-2xhqb\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.341743 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.341776 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-server-conf\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.341880 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-pod-info\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.344830 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-erlang-cookie\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.344887 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-plugins-conf\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.344927 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-erlang-cookie-secret\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.344969 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-plugins\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: 
\"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.345072 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-confd\") pod \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\" (UID: \"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659\") " Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.345771 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.347420 4872 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/39c1e000-2f81-4251-a9b5-28563d87bb93-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.347453 4872 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.347469 4872 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.349519 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.350172 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.357954 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-kube-api-access-2xhqb" (OuterVolumeSpecName: "kube-api-access-2xhqb") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "kube-api-access-2xhqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.358137 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.358772 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-pod-info" (OuterVolumeSpecName: "pod-info") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.360814 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.368210 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.398304 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-config-data" (OuterVolumeSpecName: "config-data") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.400636 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "39c1e000-2f81-4251-a9b5-28563d87bb93" (UID: "39c1e000-2f81-4251-a9b5-28563d87bb93"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.431010 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-server-conf" (OuterVolumeSpecName: "server-conf") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:22:45 crc kubenswrapper[4872]: I0203 06:22:45.452318 4872 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452810 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452826 4872 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452835 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452845 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/39c1e000-2f81-4251-a9b5-28563d87bb93-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452854 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452862 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452871 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xhqb\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-kube-api-access-2xhqb\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452901 4872 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.452910 4872 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.515412 4872 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.538343 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" (UID: "b3e0a9c0-be7d-41fe-b216-aa18c0d2d659"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.555086 4872 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.555110 4872 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.647288 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.651131 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"39c1e000-2f81-4251-a9b5-28563d87bb93","Type":"ContainerDied","Data":"13467ab91326a20c0fd1da8d44b996d1e52ab40495e71f4d43a47532486544a2"} Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.651188 4872 scope.go:117] "RemoveContainer" containerID="e151a3e574fcc4165fe498d721ea9e0b391cb3304402e14533f354c952fc43d9" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.673874 4872 generic.go:334] "Generic (PLEG): container finished" podID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerID="20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84" exitCode=0 Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.673914 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659","Type":"ContainerDied","Data":"20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84"} Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.673968 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3e0a9c0-be7d-41fe-b216-aa18c0d2d659","Type":"ContainerDied","Data":"3bf4cd0120b3bb1a3591777f7f00289702a5a09145eebcbcae6346bb1a1cef80"} Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.674105 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.778774 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.799304 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.805949 4872 scope.go:117] "RemoveContainer" containerID="f04f5897ce704aabeaac3b226dd080524eb8f6014a4fa89c680a2b95c908d017" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.808272 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.825583 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.856364 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: E0203 06:22:45.857364 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerName="rabbitmq" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.857379 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerName="rabbitmq" Feb 03 06:22:46 crc kubenswrapper[4872]: E0203 06:22:45.857430 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerName="rabbitmq" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.857439 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerName="rabbitmq" Feb 03 06:22:46 crc kubenswrapper[4872]: E0203 06:22:45.857458 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerName="setup-container" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.857464 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerName="setup-container" Feb 03 06:22:46 crc kubenswrapper[4872]: E0203 06:22:45.857495 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerName="setup-container" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.857501 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerName="setup-container" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.857740 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" containerName="rabbitmq" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.857762 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" containerName="rabbitmq" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.859374 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.862513 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.862749 4872 scope.go:117] "RemoveContainer" containerID="20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.863179 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l8fdn" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.863592 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.863822 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.864007 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.864631 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.866963 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.867776 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.873081 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.880156 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.881213 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.881442 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.881479 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.881585 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.881657 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r85vx" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.883417 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.883669 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.893020 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.939958 4872 scope.go:117] "RemoveContainer" containerID="b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970339 4872 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8h99\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-kube-api-access-j8h99\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970421 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970468 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970520 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d044870-6de9-4816-b9e2-249371dc40e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970569 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970642 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970668 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970711 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970734 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970761 4872 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970865 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970909 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee583ece-3623-4df4-b879-4ab45489bb07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970938 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d044870-6de9-4816-b9e2-249371dc40e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970965 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.970996 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee583ece-3623-4df4-b879-4ab45489bb07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.971053 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.971084 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.971108 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.971129 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.971152 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.971192 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.971220 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmc2h\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-kube-api-access-fmc2h\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.973957 4872 scope.go:117] "RemoveContainer" containerID="20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84" Feb 03 06:22:46 crc kubenswrapper[4872]: E0203 06:22:45.974371 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84\": container with ID starting with 20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84 not found: ID does not exist" containerID="20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.974402 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84"} err="failed to get container status \"20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84\": rpc error: code = NotFound desc = could not find container \"20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84\": container with ID starting with 20c0310a214072584b397e62f6aee004429ed3c41f5c3caaae5b1019901eae84 not found: ID does not exist" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.974427 4872 scope.go:117] "RemoveContainer" containerID="b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb" Feb 03 06:22:46 crc kubenswrapper[4872]: E0203 06:22:45.974644 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb\": container with ID starting with b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb not found: ID does not exist" containerID="b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:45.974666 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb"} err="failed to get 
container status \"b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb\": rpc error: code = NotFound desc = could not find container \"b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb\": container with ID starting with b37fc5c1368f715c320e7213c4e68ba61183549e35584ca1786ee8e9bb2b0efb not found: ID does not exist" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073216 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d044870-6de9-4816-b9e2-249371dc40e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073263 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073294 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073322 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073352 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073374 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073404 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073451 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073494 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ee583ece-3623-4df4-b879-4ab45489bb07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073523 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d044870-6de9-4816-b9e2-249371dc40e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073543 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073571 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee583ece-3623-4df4-b879-4ab45489bb07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073591 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073618 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073643 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073666 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073709 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073756 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073790 
4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmc2h\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-kube-api-access-fmc2h\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073845 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8h99\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-kube-api-access-j8h99\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073876 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.073896 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.075371 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.075476 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.076315 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.076384 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.076409 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.077802 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-config-data\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.078239 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.078539 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.078545 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.079111 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.082774 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ee583ece-3623-4df4-b879-4ab45489bb07-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.083083 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d044870-6de9-4816-b9e2-249371dc40e6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.093177 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d044870-6de9-4816-b9e2-249371dc40e6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.093855 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d044870-6de9-4816-b9e2-249371dc40e6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.094885 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: 
I0203 06:22:46.095104 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ee583ece-3623-4df4-b879-4ab45489bb07-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.097414 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ee583ece-3623-4df4-b879-4ab45489bb07-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.098492 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmc2h\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-kube-api-access-fmc2h\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.099909 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.110400 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.115968 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d044870-6de9-4816-b9e2-249371dc40e6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.121442 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8h99\" (UniqueName: \"kubernetes.io/projected/ee583ece-3623-4df4-b879-4ab45489bb07-kube-api-access-j8h99\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.125930 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0d044870-6de9-4816-b9e2-249371dc40e6\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.141072 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c1e000-2f81-4251-a9b5-28563d87bb93" path="/var/lib/kubelet/pods/39c1e000-2f81-4251-a9b5-28563d87bb93/volumes" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.142072 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e0a9c0-be7d-41fe-b216-aa18c0d2d659" path="/var/lib/kubelet/pods/b3e0a9c0-be7d-41fe-b216-aa18c0d2d659/volumes" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.187227 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"ee583ece-3623-4df4-b879-4ab45489bb07\") " pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.222356 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.238099 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.767630 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 06:22:46 crc kubenswrapper[4872]: I0203 06:22:46.781492 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 06:22:47 crc kubenswrapper[4872]: I0203 06:22:47.707316 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee583ece-3623-4df4-b879-4ab45489bb07","Type":"ContainerStarted","Data":"e3da852ee01898cc3a4d70df5d231cc4e3a192de02b6f4bdc1ae5b8a2744e24c"} Feb 03 06:22:47 crc kubenswrapper[4872]: I0203 06:22:47.711071 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0d044870-6de9-4816-b9e2-249371dc40e6","Type":"ContainerStarted","Data":"bd71a87a180900107ed809f4e9a468b466f44a865eae5028599ca6e991cd641a"} Feb 03 06:22:48 crc kubenswrapper[4872]: I0203 06:22:48.722901 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee583ece-3623-4df4-b879-4ab45489bb07","Type":"ContainerStarted","Data":"60e3185ec45c7b06aa46e0bdf20e208abb8c15187579311810a296d1395c2b99"} Feb 03 06:22:48 crc kubenswrapper[4872]: I0203 06:22:48.725819 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0d044870-6de9-4816-b9e2-249371dc40e6","Type":"ContainerStarted","Data":"00644450882fba3db1fac58ccb11617bd945091890881efe5d5cb0a099888019"} Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.274785 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-5ch9f"] Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.276219 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.280226 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.288989 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-5ch9f"] Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.434572 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-svc\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.434646 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.434728 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f929j\" (UniqueName: \"kubernetes.io/projected/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-kube-api-access-f929j\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.434754 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.434949 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.435031 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.435096 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-config\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.537522 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: 
\"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.537934 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f929j\" (UniqueName: \"kubernetes.io/projected/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-kube-api-access-f929j\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.537978 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.538052 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.538084 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.538118 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-config\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.538209 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-svc\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.539187 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-svc\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.539868 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.540837 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 
crc kubenswrapper[4872]: I0203 06:22:49.541527 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.542184 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.542825 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-config\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.557133 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f929j\" (UniqueName: \"kubernetes.io/projected/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-kube-api-access-f929j\") pod \"dnsmasq-dns-d558885bc-5ch9f\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:49 crc kubenswrapper[4872]: I0203 06:22:49.595920 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:50 crc kubenswrapper[4872]: I0203 06:22:50.106348 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-5ch9f"] Feb 03 06:22:50 crc kubenswrapper[4872]: I0203 06:22:50.750113 4872 generic.go:334] "Generic (PLEG): container finished" podID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" containerID="98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7" exitCode=0 Feb 03 06:22:50 crc kubenswrapper[4872]: I0203 06:22:50.750448 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" event={"ID":"793ae5ce-8d4d-4d73-9e5e-517021a84e3b","Type":"ContainerDied","Data":"98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7"} Feb 03 06:22:50 crc kubenswrapper[4872]: I0203 06:22:50.750486 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" event={"ID":"793ae5ce-8d4d-4d73-9e5e-517021a84e3b","Type":"ContainerStarted","Data":"61f04230a500420a326e7fd8e55166f2633e2c4d6e59459874cacb393726c0fe"} Feb 03 06:22:51 crc kubenswrapper[4872]: I0203 06:22:51.763583 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" event={"ID":"793ae5ce-8d4d-4d73-9e5e-517021a84e3b","Type":"ContainerStarted","Data":"3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521"} Feb 03 06:22:51 crc kubenswrapper[4872]: I0203 06:22:51.763854 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:51 crc kubenswrapper[4872]: I0203 06:22:51.792516 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" podStartSLOduration=2.792490278 podStartE2EDuration="2.792490278s" podCreationTimestamp="2026-02-03 06:22:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:22:51.789067634 +0000 UTC m=+1342.371759058" watchObservedRunningTime="2026-02-03 06:22:51.792490278 +0000 UTC m=+1342.375181712" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.597989 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.696685 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-vfqsm"] Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.696962 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" podUID="d1151951-b500-49fb-99b3-23c375b6f493" containerName="dnsmasq-dns" containerID="cri-o://9b97138e5d8999006d6b381f0be9e891c88df9673435649f2f6888f63c8b06e4" gracePeriod=10 Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.838862 4872 generic.go:334] "Generic (PLEG): container finished" podID="d1151951-b500-49fb-99b3-23c375b6f493" containerID="9b97138e5d8999006d6b381f0be9e891c88df9673435649f2f6888f63c8b06e4" exitCode=0 Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.839192 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" event={"ID":"d1151951-b500-49fb-99b3-23c375b6f493","Type":"ContainerDied","Data":"9b97138e5d8999006d6b381f0be9e891c88df9673435649f2f6888f63c8b06e4"} Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.904867 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-746ln"] Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.906525 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.932022 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-746ln"] Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.954729 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.954771 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.954801 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.954824 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-config\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.954870 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.954898 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6dgh\" (UniqueName: \"kubernetes.io/projected/c73cade5-ebf2-4b32-9eec-efbc6a089cee-kube-api-access-d6dgh\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:22:59 crc kubenswrapper[4872]: I0203 06:22:59.954921 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.056914 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 
06:23:00.056954 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.056988 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.057012 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-config\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.057060 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.057086 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6dgh\" (UniqueName: \"kubernetes.io/projected/c73cade5-ebf2-4b32-9eec-efbc6a089cee-kube-api-access-d6dgh\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.057111 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.057839 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.057895 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.058344 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-config\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.059334 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.059683 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.060333 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c73cade5-ebf2-4b32-9eec-efbc6a089cee-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.078213 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6dgh\" (UniqueName: \"kubernetes.io/projected/c73cade5-ebf2-4b32-9eec-efbc6a089cee-kube-api-access-d6dgh\") pod \"dnsmasq-dns-67cb876dc9-746ln\" (UID: \"c73cade5-ebf2-4b32-9eec-efbc6a089cee\") " pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.244443 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.406420 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.464388 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-nb\") pod \"d1151951-b500-49fb-99b3-23c375b6f493\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.464433 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr6mc\" (UniqueName: \"kubernetes.io/projected/d1151951-b500-49fb-99b3-23c375b6f493-kube-api-access-fr6mc\") pod \"d1151951-b500-49fb-99b3-23c375b6f493\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.464470 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-svc\") pod \"d1151951-b500-49fb-99b3-23c375b6f493\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.464525 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-swift-storage-0\") pod \"d1151951-b500-49fb-99b3-23c375b6f493\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.464613 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-config\") pod \"d1151951-b500-49fb-99b3-23c375b6f493\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " Feb 03 06:23:00 crc 
kubenswrapper[4872]: I0203 06:23:00.464646 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-sb\") pod \"d1151951-b500-49fb-99b3-23c375b6f493\" (UID: \"d1151951-b500-49fb-99b3-23c375b6f493\") " Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.490868 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1151951-b500-49fb-99b3-23c375b6f493-kube-api-access-fr6mc" (OuterVolumeSpecName: "kube-api-access-fr6mc") pod "d1151951-b500-49fb-99b3-23c375b6f493" (UID: "d1151951-b500-49fb-99b3-23c375b6f493"). InnerVolumeSpecName "kube-api-access-fr6mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.566869 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr6mc\" (UniqueName: \"kubernetes.io/projected/d1151951-b500-49fb-99b3-23c375b6f493-kube-api-access-fr6mc\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.594241 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1151951-b500-49fb-99b3-23c375b6f493" (UID: "d1151951-b500-49fb-99b3-23c375b6f493"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.594800 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-config" (OuterVolumeSpecName: "config") pod "d1151951-b500-49fb-99b3-23c375b6f493" (UID: "d1151951-b500-49fb-99b3-23c375b6f493"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.595422 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1151951-b500-49fb-99b3-23c375b6f493" (UID: "d1151951-b500-49fb-99b3-23c375b6f493"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.627821 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1151951-b500-49fb-99b3-23c375b6f493" (UID: "d1151951-b500-49fb-99b3-23c375b6f493"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.628340 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1151951-b500-49fb-99b3-23c375b6f493" (UID: "d1151951-b500-49fb-99b3-23c375b6f493"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.668665 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.668702 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.668713 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.668721 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.668729 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1151951-b500-49fb-99b3-23c375b6f493-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.856660 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" event={"ID":"d1151951-b500-49fb-99b3-23c375b6f493","Type":"ContainerDied","Data":"36927aa1665ad21cb10587cf1538c32aac6ec93240a2ccf4d3b1bb9b5e0816e5"} Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.856742 4872 scope.go:117] "RemoveContainer" containerID="9b97138e5d8999006d6b381f0be9e891c88df9673435649f2f6888f63c8b06e4" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.856870 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-vfqsm" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.919932 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-vfqsm"] Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.929894 4872 scope.go:117] "RemoveContainer" containerID="43c3a070f464dd41025b59d6a4b616dc66cef0e30dd20088fd4b7014491fd718" Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.938993 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-vfqsm"] Feb 03 06:23:00 crc kubenswrapper[4872]: I0203 06:23:00.962521 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-746ln"] Feb 03 06:23:01 crc kubenswrapper[4872]: I0203 06:23:01.866201 4872 generic.go:334] "Generic (PLEG): container finished" podID="c73cade5-ebf2-4b32-9eec-efbc6a089cee" containerID="6552c74c31f5774c4782896b52953f18db494c1fc5207c541d881ac56f571520" exitCode=0 Feb 03 06:23:01 crc kubenswrapper[4872]: I0203 06:23:01.866262 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" event={"ID":"c73cade5-ebf2-4b32-9eec-efbc6a089cee","Type":"ContainerDied","Data":"6552c74c31f5774c4782896b52953f18db494c1fc5207c541d881ac56f571520"} Feb 03 06:23:01 crc kubenswrapper[4872]: I0203 06:23:01.866674 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" event={"ID":"c73cade5-ebf2-4b32-9eec-efbc6a089cee","Type":"ContainerStarted","Data":"2d0c6d330bf7c8f5168a0b2ec2268da75fa3412b2fd61dfbe0556cce7ab747d6"} Feb 03 06:23:02 crc kubenswrapper[4872]: I0203 06:23:02.132681 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1151951-b500-49fb-99b3-23c375b6f493" path="/var/lib/kubelet/pods/d1151951-b500-49fb-99b3-23c375b6f493/volumes" Feb 03 06:23:02 crc kubenswrapper[4872]: I0203 06:23:02.877819 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" event={"ID":"c73cade5-ebf2-4b32-9eec-efbc6a089cee","Type":"ContainerStarted","Data":"cdc95c128ce4c39d9d50e3fabd6b86e8a85dd0259ddd22cb1eeb517dd4d3a994"} Feb 03 06:23:02 crc kubenswrapper[4872]: I0203 06:23:02.877949 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:02 crc kubenswrapper[4872]: I0203 06:23:02.900890 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" podStartSLOduration=3.900866521 podStartE2EDuration="3.900866521s" podCreationTimestamp="2026-02-03 06:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:23:02.894927145 +0000 UTC m=+1353.477618569" watchObservedRunningTime="2026-02-03 06:23:02.900866521 +0000 UTC m=+1353.483557935" Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.246355 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67cb876dc9-746ln" Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.315472 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-5ch9f"] Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.323881 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" podUID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" containerName="dnsmasq-dns" 
containerID="cri-o://3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521" gracePeriod=10 Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.871546 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.979973 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-openstack-edpm-ipam\") pod \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.980071 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-config\") pod \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.980131 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f929j\" (UniqueName: \"kubernetes.io/projected/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-kube-api-access-f929j\") pod \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.980198 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-swift-storage-0\") pod \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.980244 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-svc\") pod \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.980332 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-sb\") pod \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " Feb 03 06:23:10 crc kubenswrapper[4872]: I0203 06:23:10.980422 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-nb\") pod \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\" (UID: \"793ae5ce-8d4d-4d73-9e5e-517021a84e3b\") " Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.000020 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-kube-api-access-f929j" (OuterVolumeSpecName: "kube-api-access-f929j") pod "793ae5ce-8d4d-4d73-9e5e-517021a84e3b" (UID: "793ae5ce-8d4d-4d73-9e5e-517021a84e3b"). InnerVolumeSpecName "kube-api-access-f929j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.010559 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" event={"ID":"793ae5ce-8d4d-4d73-9e5e-517021a84e3b","Type":"ContainerDied","Data":"3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521"} Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.010629 4872 scope.go:117] "RemoveContainer" containerID="3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.010722 4872 generic.go:334] "Generic (PLEG): container finished" podID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" containerID="3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521" exitCode=0 Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.010806 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" event={"ID":"793ae5ce-8d4d-4d73-9e5e-517021a84e3b","Type":"ContainerDied","Data":"61f04230a500420a326e7fd8e55166f2633e2c4d6e59459874cacb393726c0fe"} Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.010837 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-5ch9f" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.076979 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "793ae5ce-8d4d-4d73-9e5e-517021a84e3b" (UID: "793ae5ce-8d4d-4d73-9e5e-517021a84e3b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.082555 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f929j\" (UniqueName: \"kubernetes.io/projected/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-kube-api-access-f929j\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.082579 4872 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.095031 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "793ae5ce-8d4d-4d73-9e5e-517021a84e3b" (UID: "793ae5ce-8d4d-4d73-9e5e-517021a84e3b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.109650 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "793ae5ce-8d4d-4d73-9e5e-517021a84e3b" (UID: "793ae5ce-8d4d-4d73-9e5e-517021a84e3b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.136589 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "793ae5ce-8d4d-4d73-9e5e-517021a84e3b" (UID: "793ae5ce-8d4d-4d73-9e5e-517021a84e3b"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.139172 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-config" (OuterVolumeSpecName: "config") pod "793ae5ce-8d4d-4d73-9e5e-517021a84e3b" (UID: "793ae5ce-8d4d-4d73-9e5e-517021a84e3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.142304 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "793ae5ce-8d4d-4d73-9e5e-517021a84e3b" (UID: "793ae5ce-8d4d-4d73-9e5e-517021a84e3b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.146365 4872 scope.go:117] "RemoveContainer" containerID="98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.173534 4872 scope.go:117] "RemoveContainer" containerID="3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521" Feb 03 06:23:11 crc kubenswrapper[4872]: E0203 06:23:11.174145 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521\": container with ID starting with 3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521 not found: ID does not exist" containerID="3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.174179 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521"} err="failed to get container status \"3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521\": rpc error: code = NotFound desc = could not find container \"3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521\": container with ID starting with 3773258e661266d2f428dfdcfe99d19188f4e89a99536e7a54bb5c23ab454521 not found: ID does not exist" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.174199 4872 scope.go:117] "RemoveContainer" containerID="98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7" Feb 03 06:23:11 crc kubenswrapper[4872]: E0203 06:23:11.174505 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7\": container with ID starting with 98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7 not found: ID does not exist" containerID="98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.174528 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7"} err="failed to get container status \"98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7\": rpc error: code = NotFound desc = could not find container \"98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7\": container with ID starting with 
98e569f11882d4cd2739c344a14393d75337433031d3e42079f6b584af20b1a7 not found: ID does not exist" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.183716 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.183749 4872 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.183761 4872 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-config\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.183770 4872 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.183778 4872 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/793ae5ce-8d4d-4d73-9e5e-517021a84e3b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.340271 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-5ch9f"] Feb 03 06:23:11 crc kubenswrapper[4872]: I0203 06:23:11.350284 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-5ch9f"] Feb 03 06:23:12 crc kubenswrapper[4872]: I0203 06:23:12.131994 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" path="/var/lib/kubelet/pods/793ae5ce-8d4d-4d73-9e5e-517021a84e3b/volumes" Feb 03 06:23:21 crc kubenswrapper[4872]: I0203 06:23:21.146662 4872 generic.go:334] "Generic (PLEG): container finished" podID="0d044870-6de9-4816-b9e2-249371dc40e6" containerID="00644450882fba3db1fac58ccb11617bd945091890881efe5d5cb0a099888019" exitCode=0 Feb 03 06:23:21 crc kubenswrapper[4872]: I0203 06:23:21.146820 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0d044870-6de9-4816-b9e2-249371dc40e6","Type":"ContainerDied","Data":"00644450882fba3db1fac58ccb11617bd945091890881efe5d5cb0a099888019"} Feb 03 06:23:21 crc kubenswrapper[4872]: I0203 06:23:21.175972 4872 generic.go:334] "Generic (PLEG): container finished" podID="ee583ece-3623-4df4-b879-4ab45489bb07" containerID="60e3185ec45c7b06aa46e0bdf20e208abb8c15187579311810a296d1395c2b99" exitCode=0 Feb 03 06:23:21 crc kubenswrapper[4872]: I0203 06:23:21.176201 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee583ece-3623-4df4-b879-4ab45489bb07","Type":"ContainerDied","Data":"60e3185ec45c7b06aa46e0bdf20e208abb8c15187579311810a296d1395c2b99"} Feb 03 06:23:22 crc kubenswrapper[4872]: I0203 06:23:22.185477 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ee583ece-3623-4df4-b879-4ab45489bb07","Type":"ContainerStarted","Data":"8efd04a0783edcfc6cd60feaab13cea1dd2bc9f40b2fca656f10abd013488f57"} Feb 03 06:23:22 crc kubenswrapper[4872]: I0203 06:23:22.186654 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
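
The RemoveContainer / "DeleteContainer returned error ... NotFound" pairs above show a second delete arriving after an earlier one already succeeded. The usual way to handle this is to treat NotFound as success, since the desired end state (container gone) already holds. A sketch of that pattern, with a stand-in runtime rather than the real CRI client:

```go
// Sketch only: idempotent container removal, where NotFound on a repeated
// delete is swallowed because the container is already gone.
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's NotFound RPC status.
var errNotFound = errors.New("container not found")

type runtime struct{ containers map[string]bool }

func (r *runtime) remove(id string) error {
	if !r.containers[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(r.containers, id)
	return nil
}

// removeIdempotent converts NotFound into success.
func removeIdempotent(r *runtime, id string) error {
	if err := r.remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			return nil // already deleted; nothing to do
		}
		return err
	}
	return nil
}

func main() {
	r := &runtime{containers: map[string]bool{"3773258e": true}}
	fmt.Println(removeIdempotent(r, "3773258e")) // <nil>: deleted now
	fmt.Println(removeIdempotent(r, "3773258e")) // <nil>: NotFound swallowed
}
```
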
pod="openstack/rabbitmq-server-0" Feb 03 06:23:22 crc kubenswrapper[4872]: I0203 06:23:22.187442 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0d044870-6de9-4816-b9e2-249371dc40e6","Type":"ContainerStarted","Data":"fedc8ad5db3d6d4b5b46d78669c6b0e78938735750b4f60700ae72cba08cf041"} Feb 03 06:23:22 crc kubenswrapper[4872]: I0203 06:23:22.187750 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:23:22 crc kubenswrapper[4872]: I0203 06:23:22.219045 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.219027156 podStartE2EDuration="37.219027156s" podCreationTimestamp="2026-02-03 06:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:23:22.209831951 +0000 UTC m=+1372.792523365" watchObservedRunningTime="2026-02-03 06:23:22.219027156 +0000 UTC m=+1372.801718570" Feb 03 06:23:22 crc kubenswrapper[4872]: I0203 06:23:22.240809 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.240792962 podStartE2EDuration="37.240792962s" podCreationTimestamp="2026-02-03 06:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:23:22.234240451 +0000 UTC m=+1372.816931885" watchObservedRunningTime="2026-02-03 06:23:22.240792962 +0000 UTC m=+1372.823484376" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.991093 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56"] Feb 03 06:23:33 crc kubenswrapper[4872]: E0203 06:23:33.991858 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" containerName="dnsmasq-dns" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.991870 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" containerName="dnsmasq-dns" Feb 03 06:23:33 crc kubenswrapper[4872]: E0203 06:23:33.991886 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" containerName="init" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.991892 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" containerName="init" Feb 03 06:23:33 crc kubenswrapper[4872]: E0203 06:23:33.991907 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1151951-b500-49fb-99b3-23c375b6f493" containerName="init" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.991913 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1151951-b500-49fb-99b3-23c375b6f493" containerName="init" Feb 03 06:23:33 crc kubenswrapper[4872]: E0203 06:23:33.991936 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1151951-b500-49fb-99b3-23c375b6f493" containerName="dnsmasq-dns" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.991942 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1151951-b500-49fb-99b3-23c375b6f493" containerName="dnsmasq-dns" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.992118 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="793ae5ce-8d4d-4d73-9e5e-517021a84e3b" 
containerName="dnsmasq-dns" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.992141 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1151951-b500-49fb-99b3-23c375b6f493" containerName="dnsmasq-dns" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.992719 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.994960 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.995130 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.995227 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:23:33 crc kubenswrapper[4872]: I0203 06:23:33.995838 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.074156 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56"] Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.181835 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbxd\" (UniqueName: \"kubernetes.io/projected/861d7992-b778-4bc8-9708-5f94d519db54-kube-api-access-njbxd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.182145 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.182184 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.182240 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.283984 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: 
\"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.284067 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.284235 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbxd\" (UniqueName: \"kubernetes.io/projected/861d7992-b778-4bc8-9708-5f94d519db54-kube-api-access-njbxd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.284265 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.314820 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.317181 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njbxd\" (UniqueName: \"kubernetes.io/projected/861d7992-b778-4bc8-9708-5f94d519db54-kube-api-access-njbxd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.317446 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.317884 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:34 crc kubenswrapper[4872]: I0203 06:23:34.613280 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:23:36 crc kubenswrapper[4872]: I0203 06:23:36.224026 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="ee583ece-3623-4df4-b879-4ab45489bb07" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.207:5671: connect: connection refused" Feb 03 06:23:36 crc kubenswrapper[4872]: I0203 06:23:36.240147 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0d044870-6de9-4816-b9e2-249371dc40e6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.208:5671: connect: connection refused" Feb 03 06:23:36 crc kubenswrapper[4872]: I0203 06:23:36.284933 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56"] Feb 03 06:23:36 crc kubenswrapper[4872]: I0203 06:23:36.365648 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" event={"ID":"861d7992-b778-4bc8-9708-5f94d519db54","Type":"ContainerStarted","Data":"3f3ca2cb88a85280226e375b3a2cff576ad694b71d963f78f2fa80af444c640a"} Feb 03 06:23:46 crc kubenswrapper[4872]: I0203 06:23:46.225008 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 06:23:46 crc kubenswrapper[4872]: I0203 06:23:46.249907 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 06:23:47 crc kubenswrapper[4872]: I0203 06:23:47.754738 4872 scope.go:117] "RemoveContainer" containerID="c45601ce1502c204820a1135b992d3ab8d19e29c6292b60de91cb0da73254999" Feb 03 06:23:49 crc kubenswrapper[4872]: I0203 06:23:49.806460 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-ttvm7" podUID="088135ef-2437-4cab-b009-302268e318d5" containerName="registry-server" probeResult="failure" output=< Feb 03 06:23:49 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:23:49 crc kubenswrapper[4872]: > Feb 03 06:23:49 crc kubenswrapper[4872]: I0203 06:23:49.806481 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-ttvm7" podUID="088135ef-2437-4cab-b009-302268e318d5" containerName="registry-server" probeResult="failure" output=< Feb 03 06:23:49 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:23:49 crc kubenswrapper[4872]: > Feb 03 06:23:54 crc kubenswrapper[4872]: I0203 06:23:54.320474 4872 scope.go:117] "RemoveContainer" containerID="15a673896b55e960cd90aec59634a964d48c5217b5c5f168b8568f222659596a" Feb 03 06:23:54 crc kubenswrapper[4872]: I0203 06:23:54.541306 4872 scope.go:117] "RemoveContainer" containerID="ddda5ee77227aa4d67440d46c008c156c3ce814a47d57f08409d39a46ba59c08" Feb 03 06:23:54 crc kubenswrapper[4872]: E0203 06:23:54.989068 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/openstack-k8s-operators/openstack-ansibleee-runner:a8680ddaaf035c5c5d952826ecd555e9195cc971" Feb 03 06:23:54 crc kubenswrapper[4872]: E0203 06:23:54.989119 4872 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/openstack-k8s-operators/openstack-ansibleee-runner:a8680ddaaf035c5c5d952826ecd555e9195cc971" Feb 03 06:23:54 crc kubenswrapper[4872]: E0203 06:23:54.989248 4872 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 03 06:23:54 crc kubenswrapper[4872]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.rdoproject.org/openstack-k8s-operators/openstack-ansibleee-runner:a8680ddaaf035c5c5d952826ecd555e9195cc971,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 03 06:23:54 crc kubenswrapper[4872]: - hosts: all Feb 03 06:23:54 crc kubenswrapper[4872]: strategy: linear Feb 03 06:23:54 crc kubenswrapper[4872]: tasks: Feb 03 06:23:54 crc kubenswrapper[4872]: - name: Enable podified-repos Feb 03 06:23:54 crc kubenswrapper[4872]: become: true Feb 03 06:23:54 crc kubenswrapper[4872]: ansible.builtin.shell: | Feb 03 06:23:54 crc kubenswrapper[4872]: set -euxo pipefail Feb 03 06:23:54 crc kubenswrapper[4872]: pushd /var/tmp Feb 03 06:23:54 crc kubenswrapper[4872]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 03 06:23:54 crc kubenswrapper[4872]: pushd repo-setup-main Feb 03 06:23:54 crc kubenswrapper[4872]: python3 -m venv ./venv Feb 03 06:23:54 crc kubenswrapper[4872]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 03 06:23:54 crc kubenswrapper[4872]: ./venv/bin/repo-setup current-podified -b antelope Feb 03 06:23:54 crc kubenswrapper[4872]: popd Feb 03 06:23:54 crc kubenswrapper[4872]: rm -rf repo-setup-main Feb 03 06:23:54 crc kubenswrapper[4872]: Feb 03 06:23:54 crc kubenswrapper[4872]: Feb 03 06:23:54 crc kubenswrapper[4872]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 03 06:23:54 crc kubenswrapper[4872]: edpm_override_hosts: openstack-edpm-ipam Feb 03 06:23:54 crc kubenswrapper[4872]: edpm_service_type: repo-setup Feb 03 06:23:54 crc kubenswrapper[4872]: Feb 03 06:23:54 crc kubenswrapper[4872]: Feb 03 06:23:54 crc kubenswrapper[4872]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njbxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56_openstack(861d7992-b778-4bc8-9708-5f94d519db54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 03 06:23:54 crc kubenswrapper[4872]: > logger="UnhandledError" Feb 03 06:23:54 crc kubenswrapper[4872]: E0203 06:23:54.990443 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" podUID="861d7992-b778-4bc8-9708-5f94d519db54" Feb 03 06:23:55 crc kubenswrapper[4872]: E0203 06:23:55.550820 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/openstack-k8s-operators/openstack-ansibleee-runner:a8680ddaaf035c5c5d952826ecd555e9195cc971\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" podUID="861d7992-b778-4bc8-9708-5f94d519db54" Feb 03 06:24:01 crc kubenswrapper[4872]: I0203 06:24:01.272039 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:24:01 crc kubenswrapper[4872]: I0203 06:24:01.272582 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 03 06:24:10 crc kubenswrapper[4872]: I0203 06:24:10.683629 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" event={"ID":"861d7992-b778-4bc8-9708-5f94d519db54","Type":"ContainerStarted","Data":"7eb94584ddf9688c47e557b414a025a387ff0902c752f04d9ee81e39e44e8e2a"} Feb 03 06:24:10 crc kubenswrapper[4872]: I0203 06:24:10.709441 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" podStartSLOduration=3.640189138 podStartE2EDuration="37.70941636s" podCreationTimestamp="2026-02-03 06:23:33 +0000 UTC" firstStartedPulling="2026-02-03 06:23:36.298879754 +0000 UTC m=+1386.881571168" lastFinishedPulling="2026-02-03 06:24:10.368106976 +0000 UTC m=+1420.950798390" observedRunningTime="2026-02-03 06:24:10.700550701 +0000 UTC m=+1421.283242135" watchObservedRunningTime="2026-02-03 06:24:10.70941636 +0000 UTC m=+1421.292107774" Feb 03 06:24:23 crc kubenswrapper[4872]: I0203 06:24:23.825565 4872 generic.go:334] "Generic (PLEG): container finished" podID="861d7992-b778-4bc8-9708-5f94d519db54" containerID="7eb94584ddf9688c47e557b414a025a387ff0902c752f04d9ee81e39e44e8e2a" exitCode=0 Feb 03 06:24:23 crc kubenswrapper[4872]: I0203 06:24:23.825650 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" event={"ID":"861d7992-b778-4bc8-9708-5f94d519db54","Type":"ContainerDied","Data":"7eb94584ddf9688c47e557b414a025a387ff0902c752f04d9ee81e39e44e8e2a"} Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.306434 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.429189 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-ssh-key-openstack-edpm-ipam\") pod \"861d7992-b778-4bc8-9708-5f94d519db54\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.429326 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njbxd\" (UniqueName: \"kubernetes.io/projected/861d7992-b778-4bc8-9708-5f94d519db54-kube-api-access-njbxd\") pod \"861d7992-b778-4bc8-9708-5f94d519db54\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.429477 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-inventory\") pod \"861d7992-b778-4bc8-9708-5f94d519db54\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.429625 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-repo-setup-combined-ca-bundle\") pod \"861d7992-b778-4bc8-9708-5f94d519db54\" (UID: \"861d7992-b778-4bc8-9708-5f94d519db54\") " Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.438241 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: 
"repo-setup-combined-ca-bundle") pod "861d7992-b778-4bc8-9708-5f94d519db54" (UID: "861d7992-b778-4bc8-9708-5f94d519db54"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.440053 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861d7992-b778-4bc8-9708-5f94d519db54-kube-api-access-njbxd" (OuterVolumeSpecName: "kube-api-access-njbxd") pod "861d7992-b778-4bc8-9708-5f94d519db54" (UID: "861d7992-b778-4bc8-9708-5f94d519db54"). InnerVolumeSpecName "kube-api-access-njbxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.479865 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-inventory" (OuterVolumeSpecName: "inventory") pod "861d7992-b778-4bc8-9708-5f94d519db54" (UID: "861d7992-b778-4bc8-9708-5f94d519db54"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.492856 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "861d7992-b778-4bc8-9708-5f94d519db54" (UID: "861d7992-b778-4bc8-9708-5f94d519db54"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.531962 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njbxd\" (UniqueName: \"kubernetes.io/projected/861d7992-b778-4bc8-9708-5f94d519db54-kube-api-access-njbxd\") on node \"crc\" DevicePath \"\"" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.532226 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.532238 4872 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.532248 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/861d7992-b778-4bc8-9708-5f94d519db54-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.850680 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" event={"ID":"861d7992-b778-4bc8-9708-5f94d519db54","Type":"ContainerDied","Data":"3f3ca2cb88a85280226e375b3a2cff576ad694b71d963f78f2fa80af444c640a"} Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.850746 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f3ca2cb88a85280226e375b3a2cff576ad694b71d963f78f2fa80af444c640a" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.850784 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.944671 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24"] Feb 03 06:24:25 crc kubenswrapper[4872]: E0203 06:24:25.945150 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861d7992-b778-4bc8-9708-5f94d519db54" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.945212 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="861d7992-b778-4bc8-9708-5f94d519db54" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.945448 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="861d7992-b778-4bc8-9708-5f94d519db54" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.946195 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.948288 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.948611 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.948894 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.949345 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:24:25 crc kubenswrapper[4872]: I0203 06:24:25.972256 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24"] Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.045611 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.045680 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.045869 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xqg\" (UniqueName: \"kubernetes.io/projected/57a842b6-0ca1-471d-aae3-cb4fa4545417-kube-api-access-z2xqg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.147976 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z2xqg\" (UniqueName: \"kubernetes.io/projected/57a842b6-0ca1-471d-aae3-cb4fa4545417-kube-api-access-z2xqg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.148283 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.148412 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.155518 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.156486 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.167964 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xqg\" (UniqueName: \"kubernetes.io/projected/57a842b6-0ca1-471d-aae3-cb4fa4545417-kube-api-access-z2xqg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t8f24\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.263269 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.817838 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24"] Feb 03 06:24:26 crc kubenswrapper[4872]: I0203 06:24:26.860266 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" event={"ID":"57a842b6-0ca1-471d-aae3-cb4fa4545417","Type":"ContainerStarted","Data":"333f232deda4245b5cd4eb1ce30637afce1922cce1da024524b2a9e5aec84971"} Feb 03 06:24:27 crc kubenswrapper[4872]: I0203 06:24:27.871290 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" event={"ID":"57a842b6-0ca1-471d-aae3-cb4fa4545417","Type":"ContainerStarted","Data":"d438e0463bc6a05cea1b1889c7c256fd6260b24bfb10d5c4abf1b16a8db1db86"} Feb 03 06:24:27 crc kubenswrapper[4872]: I0203 06:24:27.892458 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" podStartSLOduration=2.7029779290000002 podStartE2EDuration="2.892434899s" podCreationTimestamp="2026-02-03 06:24:25 +0000 UTC" firstStartedPulling="2026-02-03 06:24:26.820014935 +0000 UTC m=+1437.402706349" lastFinishedPulling="2026-02-03 06:24:27.009471905 +0000 UTC m=+1437.592163319" observedRunningTime="2026-02-03 06:24:27.887082789 +0000 UTC m=+1438.469774203" watchObservedRunningTime="2026-02-03 06:24:27.892434899 +0000 UTC m=+1438.475126313" Feb 03 06:24:29 crc kubenswrapper[4872]: I0203 06:24:29.893470 4872 generic.go:334] "Generic (PLEG): container finished" podID="57a842b6-0ca1-471d-aae3-cb4fa4545417" containerID="d438e0463bc6a05cea1b1889c7c256fd6260b24bfb10d5c4abf1b16a8db1db86" exitCode=0 Feb 03 06:24:29 crc kubenswrapper[4872]: I0203 06:24:29.893535 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" event={"ID":"57a842b6-0ca1-471d-aae3-cb4fa4545417","Type":"ContainerDied","Data":"d438e0463bc6a05cea1b1889c7c256fd6260b24bfb10d5c4abf1b16a8db1db86"} Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.271275 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.271651 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.287850 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.345212 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-ssh-key-openstack-edpm-ipam\") pod \"57a842b6-0ca1-471d-aae3-cb4fa4545417\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.345257 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-inventory\") pod \"57a842b6-0ca1-471d-aae3-cb4fa4545417\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.345402 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2xqg\" (UniqueName: \"kubernetes.io/projected/57a842b6-0ca1-471d-aae3-cb4fa4545417-kube-api-access-z2xqg\") pod \"57a842b6-0ca1-471d-aae3-cb4fa4545417\" (UID: \"57a842b6-0ca1-471d-aae3-cb4fa4545417\") " Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.351972 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a842b6-0ca1-471d-aae3-cb4fa4545417-kube-api-access-z2xqg" (OuterVolumeSpecName: "kube-api-access-z2xqg") pod "57a842b6-0ca1-471d-aae3-cb4fa4545417" (UID: "57a842b6-0ca1-471d-aae3-cb4fa4545417"). InnerVolumeSpecName "kube-api-access-z2xqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.379719 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-inventory" (OuterVolumeSpecName: "inventory") pod "57a842b6-0ca1-471d-aae3-cb4fa4545417" (UID: "57a842b6-0ca1-471d-aae3-cb4fa4545417"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.383871 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57a842b6-0ca1-471d-aae3-cb4fa4545417" (UID: "57a842b6-0ca1-471d-aae3-cb4fa4545417"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.447869 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.447907 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a842b6-0ca1-471d-aae3-cb4fa4545417-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.447917 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2xqg\" (UniqueName: \"kubernetes.io/projected/57a842b6-0ca1-471d-aae3-cb4fa4545417-kube-api-access-z2xqg\") on node \"crc\" DevicePath \"\"" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.915957 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" event={"ID":"57a842b6-0ca1-471d-aae3-cb4fa4545417","Type":"ContainerDied","Data":"333f232deda4245b5cd4eb1ce30637afce1922cce1da024524b2a9e5aec84971"} Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.916017 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333f232deda4245b5cd4eb1ce30637afce1922cce1da024524b2a9e5aec84971" Feb 03 06:24:31 crc kubenswrapper[4872]: I0203 06:24:31.916035 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t8f24" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.012965 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8"] Feb 03 06:24:32 crc kubenswrapper[4872]: E0203 06:24:32.013415 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a842b6-0ca1-471d-aae3-cb4fa4545417" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.013438 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a842b6-0ca1-471d-aae3-cb4fa4545417" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.013715 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a842b6-0ca1-471d-aae3-cb4fa4545417" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.014552 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.017213 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.017978 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.018197 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.021838 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.031725 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8"] Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.059431 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7gv\" (UniqueName: \"kubernetes.io/projected/711888ee-ec08-437f-bf74-54ea092796bf-kube-api-access-5n7gv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.059500 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.059557 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.059667 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.161651 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.161774 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n7gv\" (UniqueName: 
\"kubernetes.io/projected/711888ee-ec08-437f-bf74-54ea092796bf-kube-api-access-5n7gv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.161799 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.161844 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.166920 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.167674 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.168983 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.185266 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n7gv\" (UniqueName: \"kubernetes.io/projected/711888ee-ec08-437f-bf74-54ea092796bf-kube-api-access-5n7gv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.342299 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:24:32 crc kubenswrapper[4872]: I0203 06:24:32.912908 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8"] Feb 03 06:24:33 crc kubenswrapper[4872]: I0203 06:24:33.941043 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" event={"ID":"711888ee-ec08-437f-bf74-54ea092796bf","Type":"ContainerStarted","Data":"87ac4f4e92bd8d8f5886704043aa29ee0281a327c9483a53f4450cecfa83af2f"} Feb 03 06:24:33 crc kubenswrapper[4872]: I0203 06:24:33.942006 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" event={"ID":"711888ee-ec08-437f-bf74-54ea092796bf","Type":"ContainerStarted","Data":"c524cc79f1408c9ebf020426c77acdca3f802268b98c094bf97846f3bf484e94"} Feb 03 06:24:33 crc kubenswrapper[4872]: I0203 06:24:33.968843 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" podStartSLOduration=2.796804139 podStartE2EDuration="2.968817644s" podCreationTimestamp="2026-02-03 06:24:31 +0000 UTC" firstStartedPulling="2026-02-03 06:24:32.937832679 +0000 UTC m=+1443.520524093" lastFinishedPulling="2026-02-03 06:24:33.109846194 +0000 UTC m=+1443.692537598" observedRunningTime="2026-02-03 06:24:33.957581967 +0000 UTC m=+1444.540273391" watchObservedRunningTime="2026-02-03 06:24:33.968817644 +0000 UTC m=+1444.551509058" Feb 03 06:24:55 crc kubenswrapper[4872]: I0203 06:24:55.029408 4872 scope.go:117] "RemoveContainer" containerID="c2f91145d7f0449d154facf6a75e2f36d42a0e100aa9fd88d8f42643eae1e00f" Feb 03 06:25:01 crc kubenswrapper[4872]: I0203 06:25:01.271755 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:25:01 crc kubenswrapper[4872]: I0203 06:25:01.272260 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:25:01 crc kubenswrapper[4872]: I0203 06:25:01.272310 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:25:01 crc kubenswrapper[4872]: I0203 06:25:01.273101 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:25:01 crc kubenswrapper[4872]: I0203 06:25:01.273157 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" 
gracePeriod=600 Feb 03 06:25:01 crc kubenswrapper[4872]: E0203 06:25:01.412344 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:25:02 crc kubenswrapper[4872]: I0203 06:25:02.267614 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" exitCode=0 Feb 03 06:25:02 crc kubenswrapper[4872]: I0203 06:25:02.267662 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"} Feb 03 06:25:02 crc kubenswrapper[4872]: I0203 06:25:02.267747 4872 scope.go:117] "RemoveContainer" containerID="3f33fa16560568ffcc087b44ca5f1c955596c896bcd4662e8ab64b8586efed14" Feb 03 06:25:02 crc kubenswrapper[4872]: I0203 06:25:02.268429 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:25:02 crc kubenswrapper[4872]: E0203 06:25:02.268823 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:25:13 crc kubenswrapper[4872]: I0203 06:25:13.123286 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:25:13 crc kubenswrapper[4872]: E0203 06:25:13.124106 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:25:27 crc kubenswrapper[4872]: I0203 06:25:27.123592 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:25:27 crc kubenswrapper[4872]: E0203 06:25:27.124782 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:25:38 crc kubenswrapper[4872]: I0203 06:25:38.123311 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:25:38 crc kubenswrapper[4872]: E0203 06:25:38.124511 4872 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:25:52 crc kubenswrapper[4872]: I0203 06:25:52.123440 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:25:52 crc kubenswrapper[4872]: E0203 06:25:52.124553 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:25:55 crc kubenswrapper[4872]: I0203 06:25:55.118707 4872 scope.go:117] "RemoveContainer" containerID="6ad87165d548c13c101ca2d9191279df4398ffd900336678a80732ca52b40462" Feb 03 06:25:55 crc kubenswrapper[4872]: I0203 06:25:55.173447 4872 scope.go:117] "RemoveContainer" containerID="6be78455df4cde88131ca45af76207cd1e7de9daa4a8ccf672e63fc2b023731a" Feb 03 06:26:03 crc kubenswrapper[4872]: I0203 06:26:03.123042 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:26:03 crc kubenswrapper[4872]: E0203 06:26:03.123806 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.586163 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rb4w2"] Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.592494 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.614642 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb4w2"] Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.728010 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-catalog-content\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.728335 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-utilities\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.728535 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdnxz\" (UniqueName: \"kubernetes.io/projected/6049880c-1151-437a-b44e-0224abd4b163-kube-api-access-tdnxz\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.830599 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-catalog-content\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.830959 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-utilities\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.831040 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdnxz\" (UniqueName: \"kubernetes.io/projected/6049880c-1151-437a-b44e-0224abd4b163-kube-api-access-tdnxz\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.831260 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-catalog-content\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.832106 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-utilities\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.851722 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tdnxz\" (UniqueName: \"kubernetes.io/projected/6049880c-1151-437a-b44e-0224abd4b163-kube-api-access-tdnxz\") pod \"redhat-marketplace-rb4w2\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:11 crc kubenswrapper[4872]: I0203 06:26:11.925372 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:12 crc kubenswrapper[4872]: I0203 06:26:12.451113 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb4w2"] Feb 03 06:26:13 crc kubenswrapper[4872]: I0203 06:26:13.126495 4872 generic.go:334] "Generic (PLEG): container finished" podID="6049880c-1151-437a-b44e-0224abd4b163" containerID="9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157" exitCode=0 Feb 03 06:26:13 crc kubenswrapper[4872]: I0203 06:26:13.126574 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb4w2" event={"ID":"6049880c-1151-437a-b44e-0224abd4b163","Type":"ContainerDied","Data":"9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157"} Feb 03 06:26:13 crc kubenswrapper[4872]: I0203 06:26:13.127014 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb4w2" event={"ID":"6049880c-1151-437a-b44e-0224abd4b163","Type":"ContainerStarted","Data":"51895356248e708f5bf6e7ef3962815425c27ead73906a992217319a5a753f42"} Feb 03 06:26:14 crc kubenswrapper[4872]: I0203 06:26:14.123648 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:26:14 crc kubenswrapper[4872]: E0203 06:26:14.124502 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:26:15 crc kubenswrapper[4872]: I0203 06:26:15.149838 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb4w2" event={"ID":"6049880c-1151-437a-b44e-0224abd4b163","Type":"ContainerStarted","Data":"a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7"} Feb 03 06:26:16 crc kubenswrapper[4872]: I0203 06:26:16.160912 4872 generic.go:334] "Generic (PLEG): container finished" podID="6049880c-1151-437a-b44e-0224abd4b163" containerID="a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7" exitCode=0 Feb 03 06:26:16 crc kubenswrapper[4872]: I0203 06:26:16.161240 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb4w2" event={"ID":"6049880c-1151-437a-b44e-0224abd4b163","Type":"ContainerDied","Data":"a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7"} Feb 03 06:26:17 crc kubenswrapper[4872]: I0203 06:26:17.174412 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb4w2" event={"ID":"6049880c-1151-437a-b44e-0224abd4b163","Type":"ContainerStarted","Data":"b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea"} Feb 03 06:26:17 crc kubenswrapper[4872]: I0203 06:26:17.192009 4872 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-rb4w2" podStartSLOduration=2.688154135 podStartE2EDuration="6.191989733s" podCreationTimestamp="2026-02-03 06:26:11 +0000 UTC" firstStartedPulling="2026-02-03 06:26:13.128676739 +0000 UTC m=+1543.711368193" lastFinishedPulling="2026-02-03 06:26:16.632512377 +0000 UTC m=+1547.215203791" observedRunningTime="2026-02-03 06:26:17.189504412 +0000 UTC m=+1547.772195836" watchObservedRunningTime="2026-02-03 06:26:17.191989733 +0000 UTC m=+1547.774681147" Feb 03 06:26:21 crc kubenswrapper[4872]: I0203 06:26:21.925598 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:21 crc kubenswrapper[4872]: I0203 06:26:21.926425 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:21 crc kubenswrapper[4872]: I0203 06:26:21.986955 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:22 crc kubenswrapper[4872]: I0203 06:26:22.286818 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:22 crc kubenswrapper[4872]: I0203 06:26:22.362554 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb4w2"] Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.244195 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rb4w2" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="registry-server" containerID="cri-o://b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea" gracePeriod=2 Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.669466 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.820307 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-catalog-content\") pod \"6049880c-1151-437a-b44e-0224abd4b163\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.820462 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdnxz\" (UniqueName: \"kubernetes.io/projected/6049880c-1151-437a-b44e-0224abd4b163-kube-api-access-tdnxz\") pod \"6049880c-1151-437a-b44e-0224abd4b163\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.820510 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-utilities\") pod \"6049880c-1151-437a-b44e-0224abd4b163\" (UID: \"6049880c-1151-437a-b44e-0224abd4b163\") " Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.821940 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-utilities" (OuterVolumeSpecName: "utilities") pod "6049880c-1151-437a-b44e-0224abd4b163" (UID: "6049880c-1151-437a-b44e-0224abd4b163"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.831329 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6049880c-1151-437a-b44e-0224abd4b163-kube-api-access-tdnxz" (OuterVolumeSpecName: "kube-api-access-tdnxz") pod "6049880c-1151-437a-b44e-0224abd4b163" (UID: "6049880c-1151-437a-b44e-0224abd4b163"). InnerVolumeSpecName "kube-api-access-tdnxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.841802 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6049880c-1151-437a-b44e-0224abd4b163" (UID: "6049880c-1151-437a-b44e-0224abd4b163"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.923452 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdnxz\" (UniqueName: \"kubernetes.io/projected/6049880c-1151-437a-b44e-0224abd4b163-kube-api-access-tdnxz\") on node \"crc\" DevicePath \"\"" Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.923513 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:26:24 crc kubenswrapper[4872]: I0203 06:26:24.923534 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6049880c-1151-437a-b44e-0224abd4b163-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.262763 4872 generic.go:334] "Generic (PLEG): container finished" podID="6049880c-1151-437a-b44e-0224abd4b163" containerID="b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea" exitCode=0 Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.262819 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb4w2" event={"ID":"6049880c-1151-437a-b44e-0224abd4b163","Type":"ContainerDied","Data":"b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea"} Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.263965 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb4w2" event={"ID":"6049880c-1151-437a-b44e-0224abd4b163","Type":"ContainerDied","Data":"51895356248e708f5bf6e7ef3962815425c27ead73906a992217319a5a753f42"} Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.264106 4872 scope.go:117] "RemoveContainer" containerID="b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.262850 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb4w2" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.309059 4872 scope.go:117] "RemoveContainer" containerID="a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.312105 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb4w2"] Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.328820 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb4w2"] Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.348047 4872 scope.go:117] "RemoveContainer" containerID="9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.387816 4872 scope.go:117] "RemoveContainer" containerID="b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea" Feb 03 06:26:25 crc kubenswrapper[4872]: E0203 06:26:25.388289 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea\": container with ID starting with b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea not found: ID does not exist" containerID="b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.388328 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea"} err="failed to get container status \"b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea\": rpc error: code = NotFound desc = could not find container \"b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea\": container with ID starting with b27b62f30102c540a406b08bb2d1acf00f5a56e2de514b06a19360b73ecb31ea not found: ID does not exist" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.388361 4872 scope.go:117] "RemoveContainer" containerID="a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7" Feb 03 06:26:25 crc kubenswrapper[4872]: E0203 06:26:25.388679 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7\": container with ID starting with a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7 not found: ID does not exist" containerID="a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.388709 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7"} err="failed to get container status \"a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7\": rpc error: code = NotFound desc = could not find container \"a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7\": container with ID starting with a83b738371438f9115f2a62f2a50c938f766a1b6c789627364e2163c9351bad7 not found: ID does not exist" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.388720 4872 scope.go:117] "RemoveContainer" containerID="9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157" Feb 03 06:26:25 crc kubenswrapper[4872]: E0203 06:26:25.388931 4872 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157\": container with ID starting with 9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157 not found: ID does not exist" containerID="9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157" Feb 03 06:26:25 crc kubenswrapper[4872]: I0203 06:26:25.388948 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157"} err="failed to get container status \"9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157\": rpc error: code = NotFound desc = could not find container \"9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157\": container with ID starting with 9305dbd45f0f32993084188a35d716a58f9751b9ea5b25b7acfb4b7094d0b157 not found: ID does not exist" Feb 03 06:26:26 crc kubenswrapper[4872]: I0203 06:26:26.139934 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6049880c-1151-437a-b44e-0224abd4b163" path="/var/lib/kubelet/pods/6049880c-1151-437a-b44e-0224abd4b163/volumes" Feb 03 06:26:27 crc kubenswrapper[4872]: I0203 06:26:27.123494 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:26:27 crc kubenswrapper[4872]: E0203 06:26:27.124440 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.691786 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s9svn"] Feb 03 06:26:37 crc kubenswrapper[4872]: E0203 06:26:37.692772 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="extract-content" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.692789 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="extract-content" Feb 03 06:26:37 crc kubenswrapper[4872]: E0203 06:26:37.692827 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="extract-utilities" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.692836 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="extract-utilities" Feb 03 06:26:37 crc kubenswrapper[4872]: E0203 06:26:37.692874 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="registry-server" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.692884 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="registry-server" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.693122 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="6049880c-1151-437a-b44e-0224abd4b163" containerName="registry-server" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.694974 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.705802 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9svn"] Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.804864 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hsn\" (UniqueName: \"kubernetes.io/projected/eacf2753-ac25-46de-a40a-643cf49de8dd-kube-api-access-s8hsn\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.805426 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-utilities\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.805482 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-catalog-content\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.907482 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hsn\" (UniqueName: \"kubernetes.io/projected/eacf2753-ac25-46de-a40a-643cf49de8dd-kube-api-access-s8hsn\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.907788 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-utilities\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.907922 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-catalog-content\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.908290 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-utilities\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.908416 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-catalog-content\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:37 crc kubenswrapper[4872]: I0203 06:26:37.944315 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s8hsn\" (UniqueName: \"kubernetes.io/projected/eacf2753-ac25-46de-a40a-643cf49de8dd-kube-api-access-s8hsn\") pod \"redhat-operators-s9svn\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:38 crc kubenswrapper[4872]: I0203 06:26:38.016339 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:38 crc kubenswrapper[4872]: I0203 06:26:38.526544 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9svn"] Feb 03 06:26:39 crc kubenswrapper[4872]: I0203 06:26:39.122432 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:26:39 crc kubenswrapper[4872]: E0203 06:26:39.122857 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:26:39 crc kubenswrapper[4872]: I0203 06:26:39.443223 4872 generic.go:334] "Generic (PLEG): container finished" podID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerID="3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325" exitCode=0 Feb 03 06:26:39 crc kubenswrapper[4872]: I0203 06:26:39.443273 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9svn" event={"ID":"eacf2753-ac25-46de-a40a-643cf49de8dd","Type":"ContainerDied","Data":"3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325"} Feb 03 06:26:39 crc kubenswrapper[4872]: I0203 06:26:39.443308 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9svn" event={"ID":"eacf2753-ac25-46de-a40a-643cf49de8dd","Type":"ContainerStarted","Data":"c7d2bc2539755ff4d5ee18137ff37c7832f077d8023f1e3fd6e4436fa06b045b"} Feb 03 06:26:39 crc kubenswrapper[4872]: I0203 06:26:39.446029 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:26:40 crc kubenswrapper[4872]: I0203 06:26:40.455206 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9svn" event={"ID":"eacf2753-ac25-46de-a40a-643cf49de8dd","Type":"ContainerStarted","Data":"18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833"} Feb 03 06:26:45 crc kubenswrapper[4872]: I0203 06:26:45.507628 4872 generic.go:334] "Generic (PLEG): container finished" podID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerID="18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833" exitCode=0 Feb 03 06:26:45 crc kubenswrapper[4872]: I0203 06:26:45.507677 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9svn" event={"ID":"eacf2753-ac25-46de-a40a-643cf49de8dd","Type":"ContainerDied","Data":"18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833"} Feb 03 06:26:46 crc kubenswrapper[4872]: I0203 06:26:46.522645 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9svn" 
event={"ID":"eacf2753-ac25-46de-a40a-643cf49de8dd","Type":"ContainerStarted","Data":"3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a"} Feb 03 06:26:48 crc kubenswrapper[4872]: I0203 06:26:48.016771 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:48 crc kubenswrapper[4872]: I0203 06:26:48.017016 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:26:49 crc kubenswrapper[4872]: I0203 06:26:49.067427 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s9svn" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="registry-server" probeResult="failure" output=< Feb 03 06:26:49 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:26:49 crc kubenswrapper[4872]: > Feb 03 06:26:52 crc kubenswrapper[4872]: I0203 06:26:52.123392 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:26:52 crc kubenswrapper[4872]: E0203 06:26:52.124075 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:26:55 crc kubenswrapper[4872]: I0203 06:26:55.275036 4872 scope.go:117] "RemoveContainer" containerID="6aa2094699e7a97240fa2afbc6511fca4a04ba50a205d68fa91e0696ded391b3" Feb 03 06:26:55 crc kubenswrapper[4872]: I0203 06:26:55.298682 4872 scope.go:117] "RemoveContainer" containerID="753de1fbcab9eec927071fad5087edb3e06a3a51d11f6af06feb3eabcba20b26" Feb 03 06:26:55 crc kubenswrapper[4872]: I0203 06:26:55.330557 4872 scope.go:117] "RemoveContainer" containerID="25bc4abcd3e8657bb965b10355597470032c487d08ee20c109fcd0e8bf34876f" Feb 03 06:26:55 crc kubenswrapper[4872]: I0203 06:26:55.356936 4872 scope.go:117] "RemoveContainer" containerID="674043476d243ae5a686485afd5c876110657d2e975fae0bc284b321e28ecf9b" Feb 03 06:26:59 crc kubenswrapper[4872]: I0203 06:26:59.082588 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s9svn" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="registry-server" probeResult="failure" output=< Feb 03 06:26:59 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:26:59 crc kubenswrapper[4872]: > Feb 03 06:27:05 crc kubenswrapper[4872]: I0203 06:27:05.122865 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:27:05 crc kubenswrapper[4872]: E0203 06:27:05.123700 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:27:09 crc kubenswrapper[4872]: I0203 06:27:09.070705 4872 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-s9svn" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="registry-server" probeResult="failure" output=< Feb 03 06:27:09 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:27:09 crc kubenswrapper[4872]: > Feb 03 06:27:16 crc kubenswrapper[4872]: I0203 06:27:16.123797 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:27:16 crc kubenswrapper[4872]: E0203 06:27:16.124468 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:27:18 crc kubenswrapper[4872]: I0203 06:27:18.060336 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:27:18 crc kubenswrapper[4872]: I0203 06:27:18.094395 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s9svn" podStartSLOduration=34.609134795 podStartE2EDuration="41.094376865s" podCreationTimestamp="2026-02-03 06:26:37 +0000 UTC" firstStartedPulling="2026-02-03 06:26:39.445659916 +0000 UTC m=+1570.028351330" lastFinishedPulling="2026-02-03 06:26:45.930901946 +0000 UTC m=+1576.513593400" observedRunningTime="2026-02-03 06:26:46.542395499 +0000 UTC m=+1577.125086923" watchObservedRunningTime="2026-02-03 06:27:18.094376865 +0000 UTC m=+1608.677068279" Feb 03 06:27:18 crc kubenswrapper[4872]: I0203 06:27:18.112278 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:27:18 crc kubenswrapper[4872]: I0203 06:27:18.301522 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9svn"] Feb 03 06:27:19 crc kubenswrapper[4872]: I0203 06:27:19.842456 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s9svn" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="registry-server" containerID="cri-o://3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a" gracePeriod=2 Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.314767 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.502395 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-catalog-content\") pod \"eacf2753-ac25-46de-a40a-643cf49de8dd\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.502547 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8hsn\" (UniqueName: \"kubernetes.io/projected/eacf2753-ac25-46de-a40a-643cf49de8dd-kube-api-access-s8hsn\") pod \"eacf2753-ac25-46de-a40a-643cf49de8dd\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.502636 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-utilities\") pod \"eacf2753-ac25-46de-a40a-643cf49de8dd\" (UID: \"eacf2753-ac25-46de-a40a-643cf49de8dd\") " Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.504924 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-utilities" (OuterVolumeSpecName: "utilities") pod "eacf2753-ac25-46de-a40a-643cf49de8dd" (UID: "eacf2753-ac25-46de-a40a-643cf49de8dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.510382 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eacf2753-ac25-46de-a40a-643cf49de8dd-kube-api-access-s8hsn" (OuterVolumeSpecName: "kube-api-access-s8hsn") pod "eacf2753-ac25-46de-a40a-643cf49de8dd" (UID: "eacf2753-ac25-46de-a40a-643cf49de8dd"). InnerVolumeSpecName "kube-api-access-s8hsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.606108 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8hsn\" (UniqueName: \"kubernetes.io/projected/eacf2753-ac25-46de-a40a-643cf49de8dd-kube-api-access-s8hsn\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.606468 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.640225 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eacf2753-ac25-46de-a40a-643cf49de8dd" (UID: "eacf2753-ac25-46de-a40a-643cf49de8dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.708489 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eacf2753-ac25-46de-a40a-643cf49de8dd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.853585 4872 generic.go:334] "Generic (PLEG): container finished" podID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerID="3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a" exitCode=0 Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.853629 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9svn" event={"ID":"eacf2753-ac25-46de-a40a-643cf49de8dd","Type":"ContainerDied","Data":"3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a"} Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.854859 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9svn" event={"ID":"eacf2753-ac25-46de-a40a-643cf49de8dd","Type":"ContainerDied","Data":"c7d2bc2539755ff4d5ee18137ff37c7832f077d8023f1e3fd6e4436fa06b045b"} Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.854963 4872 scope.go:117] "RemoveContainer" containerID="3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.853649 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9svn" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.886092 4872 scope.go:117] "RemoveContainer" containerID="18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.911634 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9svn"] Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.917740 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s9svn"] Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.943146 4872 scope.go:117] "RemoveContainer" containerID="3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.969479 4872 scope.go:117] "RemoveContainer" containerID="3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a" Feb 03 06:27:20 crc kubenswrapper[4872]: E0203 06:27:20.969893 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a\": container with ID starting with 3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a not found: ID does not exist" containerID="3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.969922 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a"} err="failed to get container status \"3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a\": rpc error: code = NotFound desc = could not find container \"3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a\": container with ID starting with 3b682f4bad0ae20578bde2803ef41df4356d8d2595202b85e0e95c592039df0a not found: ID does not exist" Feb 03 06:27:20 crc 
kubenswrapper[4872]: I0203 06:27:20.969944 4872 scope.go:117] "RemoveContainer" containerID="18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833" Feb 03 06:27:20 crc kubenswrapper[4872]: E0203 06:27:20.970207 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833\": container with ID starting with 18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833 not found: ID does not exist" containerID="18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.970230 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833"} err="failed to get container status \"18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833\": rpc error: code = NotFound desc = could not find container \"18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833\": container with ID starting with 18bfc72ce660b533ca362c0f179f6d452a4a3e38593b5e057ef7f17297dfe833 not found: ID does not exist" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.970244 4872 scope.go:117] "RemoveContainer" containerID="3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325" Feb 03 06:27:20 crc kubenswrapper[4872]: E0203 06:27:20.970449 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325\": container with ID starting with 3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325 not found: ID does not exist" containerID="3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325" Feb 03 06:27:20 crc kubenswrapper[4872]: I0203 06:27:20.970468 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325"} err="failed to get container status \"3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325\": rpc error: code = NotFound desc = could not find container \"3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325\": container with ID starting with 3b5db9192cb1da3c2427d4553cd2591a6884001ec135da3d6929eb424a54a325 not found: ID does not exist" Feb 03 06:27:22 crc kubenswrapper[4872]: I0203 06:27:22.132037 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" path="/var/lib/kubelet/pods/eacf2753-ac25-46de-a40a-643cf49de8dd/volumes" Feb 03 06:27:27 crc kubenswrapper[4872]: I0203 06:27:27.123785 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:27:27 crc kubenswrapper[4872]: E0203 06:27:27.125952 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.155575 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v27rs"] Feb 03 
06:27:30 crc kubenswrapper[4872]: E0203 06:27:30.156525 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="registry-server" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.156547 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="registry-server" Feb 03 06:27:30 crc kubenswrapper[4872]: E0203 06:27:30.156580 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="extract-utilities" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.156592 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="extract-utilities" Feb 03 06:27:30 crc kubenswrapper[4872]: E0203 06:27:30.156636 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="extract-content" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.156648 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="extract-content" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.157141 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="eacf2753-ac25-46de-a40a-643cf49de8dd" containerName="registry-server" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.211478 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.240342 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v27rs"] Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.317155 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-utilities\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.317454 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-catalog-content\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.317613 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mct\" (UniqueName: \"kubernetes.io/projected/4239492e-41f5-4550-ad80-df7c558205a1-kube-api-access-b5mct\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.419352 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-catalog-content\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.419426 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b5mct\" (UniqueName: \"kubernetes.io/projected/4239492e-41f5-4550-ad80-df7c558205a1-kube-api-access-b5mct\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.419501 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-utilities\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.420019 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-utilities\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.420034 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-catalog-content\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.442575 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mct\" (UniqueName: \"kubernetes.io/projected/4239492e-41f5-4550-ad80-df7c558205a1-kube-api-access-b5mct\") pod \"community-operators-v27rs\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:30 crc kubenswrapper[4872]: I0203 06:27:30.545086 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:31 crc kubenswrapper[4872]: I0203 06:27:31.147760 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v27rs"] Feb 03 06:27:31 crc kubenswrapper[4872]: I0203 06:27:31.986885 4872 generic.go:334] "Generic (PLEG): container finished" podID="4239492e-41f5-4550-ad80-df7c558205a1" containerID="8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43" exitCode=0 Feb 03 06:27:31 crc kubenswrapper[4872]: I0203 06:27:31.986961 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v27rs" event={"ID":"4239492e-41f5-4550-ad80-df7c558205a1","Type":"ContainerDied","Data":"8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43"} Feb 03 06:27:31 crc kubenswrapper[4872]: I0203 06:27:31.987201 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v27rs" event={"ID":"4239492e-41f5-4550-ad80-df7c558205a1","Type":"ContainerStarted","Data":"bdad1b2331303eb4a19cdc67923ff9a7104d189667106704a6b2399e1db4bf55"} Feb 03 06:27:32 crc kubenswrapper[4872]: I0203 06:27:32.070482 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-h7nwv"] Feb 03 06:27:32 crc kubenswrapper[4872]: I0203 06:27:32.078078 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6t9pq"] Feb 03 06:27:32 crc kubenswrapper[4872]: I0203 06:27:32.088134 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-h7nwv"] Feb 03 06:27:32 crc kubenswrapper[4872]: I0203 06:27:32.094888 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6t9pq"] Feb 03 06:27:32 crc kubenswrapper[4872]: I0203 06:27:32.131444 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b88e7c-6a76-4769-b83f-bba7810fa54a" path="/var/lib/kubelet/pods/b0b88e7c-6a76-4769-b83f-bba7810fa54a/volumes" Feb 03 06:27:32 crc kubenswrapper[4872]: I0203 06:27:32.132278 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d288030f-c7b0-415f-a75c-710290cfcd38" path="/var/lib/kubelet/pods/d288030f-c7b0-415f-a75c-710290cfcd38/volumes" Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 06:27:33.038084 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-29e3-account-create-update-dw7r4"] Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 06:27:33.048172 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0620-account-create-update-5xrsx"] Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 06:27:33.058732 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-skqb9"] Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 06:27:33.067967 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-29e3-account-create-update-dw7r4"] Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 06:27:33.077018 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0620-account-create-update-5xrsx"] Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 06:27:33.084850 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-skqb9"] Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 06:27:33.092565 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3446-account-create-update-4x4hf"] Feb 03 06:27:33 crc kubenswrapper[4872]: I0203 
06:27:33.099665 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3446-account-create-update-4x4hf"] Feb 03 06:27:34 crc kubenswrapper[4872]: I0203 06:27:34.012275 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v27rs" event={"ID":"4239492e-41f5-4550-ad80-df7c558205a1","Type":"ContainerStarted","Data":"33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401"} Feb 03 06:27:34 crc kubenswrapper[4872]: I0203 06:27:34.135293 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c8616e4-d904-4f7c-9a38-d30207a53cb4" path="/var/lib/kubelet/pods/1c8616e4-d904-4f7c-9a38-d30207a53cb4/volumes" Feb 03 06:27:34 crc kubenswrapper[4872]: I0203 06:27:34.136653 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33115fbf-8226-4f10-8a4d-bb125f811922" path="/var/lib/kubelet/pods/33115fbf-8226-4f10-8a4d-bb125f811922/volumes" Feb 03 06:27:34 crc kubenswrapper[4872]: I0203 06:27:34.138415 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d299733c-5ce6-4a78-b151-9c5e81026c42" path="/var/lib/kubelet/pods/d299733c-5ce6-4a78-b151-9c5e81026c42/volumes" Feb 03 06:27:34 crc kubenswrapper[4872]: I0203 06:27:34.139720 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3feeddc-deae-4d30-a8c4-a8b7dec17966" path="/var/lib/kubelet/pods/e3feeddc-deae-4d30-a8c4-a8b7dec17966/volumes" Feb 03 06:27:35 crc kubenswrapper[4872]: I0203 06:27:35.024810 4872 generic.go:334] "Generic (PLEG): container finished" podID="4239492e-41f5-4550-ad80-df7c558205a1" containerID="33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401" exitCode=0 Feb 03 06:27:35 crc kubenswrapper[4872]: I0203 06:27:35.024857 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v27rs" event={"ID":"4239492e-41f5-4550-ad80-df7c558205a1","Type":"ContainerDied","Data":"33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401"} Feb 03 06:27:36 crc kubenswrapper[4872]: I0203 06:27:36.040063 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v27rs" event={"ID":"4239492e-41f5-4550-ad80-df7c558205a1","Type":"ContainerStarted","Data":"c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84"} Feb 03 06:27:40 crc kubenswrapper[4872]: I0203 06:27:40.545963 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:40 crc kubenswrapper[4872]: I0203 06:27:40.546726 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:40 crc kubenswrapper[4872]: I0203 06:27:40.621391 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:40 crc kubenswrapper[4872]: I0203 06:27:40.645573 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v27rs" podStartSLOduration=6.891861008 podStartE2EDuration="10.645555469s" podCreationTimestamp="2026-02-03 06:27:30 +0000 UTC" firstStartedPulling="2026-02-03 06:27:31.988415696 +0000 UTC m=+1622.571107120" lastFinishedPulling="2026-02-03 06:27:35.742110127 +0000 UTC m=+1626.324801581" observedRunningTime="2026-02-03 06:27:36.072955629 +0000 UTC m=+1626.655647053" watchObservedRunningTime="2026-02-03 06:27:40.645555469 +0000 UTC 
m=+1631.228246903" Feb 03 06:27:41 crc kubenswrapper[4872]: I0203 06:27:41.150084 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:41 crc kubenswrapper[4872]: I0203 06:27:41.232477 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v27rs"] Feb 03 06:27:42 crc kubenswrapper[4872]: I0203 06:27:42.125219 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:27:42 crc kubenswrapper[4872]: E0203 06:27:42.125819 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.115218 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v27rs" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="registry-server" containerID="cri-o://c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84" gracePeriod=2 Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.565066 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.765941 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5mct\" (UniqueName: \"kubernetes.io/projected/4239492e-41f5-4550-ad80-df7c558205a1-kube-api-access-b5mct\") pod \"4239492e-41f5-4550-ad80-df7c558205a1\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.766293 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-utilities\") pod \"4239492e-41f5-4550-ad80-df7c558205a1\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.766362 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-catalog-content\") pod \"4239492e-41f5-4550-ad80-df7c558205a1\" (UID: \"4239492e-41f5-4550-ad80-df7c558205a1\") " Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.766988 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-utilities" (OuterVolumeSpecName: "utilities") pod "4239492e-41f5-4550-ad80-df7c558205a1" (UID: "4239492e-41f5-4550-ad80-df7c558205a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.775036 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4239492e-41f5-4550-ad80-df7c558205a1-kube-api-access-b5mct" (OuterVolumeSpecName: "kube-api-access-b5mct") pod "4239492e-41f5-4550-ad80-df7c558205a1" (UID: "4239492e-41f5-4550-ad80-df7c558205a1"). InnerVolumeSpecName "kube-api-access-b5mct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.867966 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:43 crc kubenswrapper[4872]: I0203 06:27:43.868137 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5mct\" (UniqueName: \"kubernetes.io/projected/4239492e-41f5-4550-ad80-df7c558205a1-kube-api-access-b5mct\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.128902 4872 generic.go:334] "Generic (PLEG): container finished" podID="4239492e-41f5-4550-ad80-df7c558205a1" containerID="c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84" exitCode=0 Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.128996 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v27rs" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.147700 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v27rs" event={"ID":"4239492e-41f5-4550-ad80-df7c558205a1","Type":"ContainerDied","Data":"c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84"} Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.147750 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v27rs" event={"ID":"4239492e-41f5-4550-ad80-df7c558205a1","Type":"ContainerDied","Data":"bdad1b2331303eb4a19cdc67923ff9a7104d189667106704a6b2399e1db4bf55"} Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.147777 4872 scope.go:117] "RemoveContainer" containerID="c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.170768 4872 scope.go:117] "RemoveContainer" containerID="33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.198092 4872 scope.go:117] "RemoveContainer" containerID="8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.251418 4872 scope.go:117] "RemoveContainer" containerID="c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84" Feb 03 06:27:44 crc kubenswrapper[4872]: E0203 06:27:44.260224 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84\": container with ID starting with c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84 not found: ID does not exist" containerID="c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.260285 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84"} err="failed to get container status \"c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84\": rpc error: code = NotFound desc = could not find container \"c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84\": container with ID starting with c34572ff98e24c7bb1f1ac10f3b2dc1d97f9253a003d4c2a4f036ff5b73edb84 not found: ID does not exist" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.260317 4872 scope.go:117] 
"RemoveContainer" containerID="33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401" Feb 03 06:27:44 crc kubenswrapper[4872]: E0203 06:27:44.260860 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401\": container with ID starting with 33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401 not found: ID does not exist" containerID="33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.260923 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401"} err="failed to get container status \"33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401\": rpc error: code = NotFound desc = could not find container \"33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401\": container with ID starting with 33f244a320616a4fc81961e7dc2fdc464c4d26949891949f0f7d9d96d8c37401 not found: ID does not exist" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.260945 4872 scope.go:117] "RemoveContainer" containerID="8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43" Feb 03 06:27:44 crc kubenswrapper[4872]: E0203 06:27:44.261345 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43\": container with ID starting with 8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43 not found: ID does not exist" containerID="8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.261376 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43"} err="failed to get container status \"8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43\": rpc error: code = NotFound desc = could not find container \"8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43\": container with ID starting with 8bad499646232706b83940c423235e7129c34b09b04aff321bacc4d5e92b5e43 not found: ID does not exist" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.314961 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4239492e-41f5-4550-ad80-df7c558205a1" (UID: "4239492e-41f5-4550-ad80-df7c558205a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.377062 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4239492e-41f5-4550-ad80-df7c558205a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.468463 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v27rs"] Feb 03 06:27:44 crc kubenswrapper[4872]: I0203 06:27:44.476971 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v27rs"] Feb 03 06:27:46 crc kubenswrapper[4872]: I0203 06:27:46.134227 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4239492e-41f5-4550-ad80-df7c558205a1" path="/var/lib/kubelet/pods/4239492e-41f5-4550-ad80-df7c558205a1/volumes" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.231581 4872 generic.go:334] "Generic (PLEG): container finished" podID="711888ee-ec08-437f-bf74-54ea092796bf" containerID="87ac4f4e92bd8d8f5886704043aa29ee0281a327c9483a53f4450cecfa83af2f" exitCode=0 Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.232308 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" event={"ID":"711888ee-ec08-437f-bf74-54ea092796bf","Type":"ContainerDied","Data":"87ac4f4e92bd8d8f5886704043aa29ee0281a327c9483a53f4450cecfa83af2f"} Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.465399 4872 scope.go:117] "RemoveContainer" containerID="873a680385b43b87ce2cb7f9c22baea0eab6e8cecc32f490547023326e67b2dd" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.524438 4872 scope.go:117] "RemoveContainer" containerID="75a8c2e29b24670928dadd9615ff04ca603b887e07b653c06dfabcadbfdd50f1" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.569018 4872 scope.go:117] "RemoveContainer" containerID="fc7075ba68a08dad63898a1ead4ed1960338df4f69b3cc422563e1e4b594fb36" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.631282 4872 scope.go:117] "RemoveContainer" containerID="36845dcd1f9cf364e47ec09d3035dbbdc5a3906b844c0e1c2b251f3586d3994c" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.655599 4872 scope.go:117] "RemoveContainer" containerID="67b8cb200dbc6895c28eff9022330d5ed2c3cbb0d565dca14fb9363dc000dfda" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.681366 4872 scope.go:117] "RemoveContainer" containerID="cdd5381e46e88564b6b1a5769d34f844eeb83c78856196f17823395c93eb15e2" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.752660 4872 scope.go:117] "RemoveContainer" containerID="39ead5c97adb089a8fed07dfe75a31ee1d802b62b263dce72e715fb884b69da0" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.801900 4872 scope.go:117] "RemoveContainer" containerID="7378c0e4a6ee9b1950497e1f78ec1f774224cb02249d460a872f1db8b4ddd4db" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.844218 4872 scope.go:117] "RemoveContainer" containerID="59d3f490f1a537bcba35db6012608020150bf267a12d5344fcbb8aa621ff837b" Feb 03 06:27:55 crc kubenswrapper[4872]: I0203 06:27:55.897864 4872 scope.go:117] "RemoveContainer" containerID="2ac21265b035f972bdeb9d29e82d31136fb5474a8ddac2da050e60b60a7500a3" Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.052264 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7rrj2"] Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.063096 4872 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7rrj2"] Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.124755 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:27:56 crc kubenswrapper[4872]: E0203 06:27:56.125036 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.132144 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d48c0f3-0353-4e0d-a5af-089233c0ab65" path="/var/lib/kubelet/pods/7d48c0f3-0353-4e0d-a5af-089233c0ab65/volumes" Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.728669 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.921763 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-ssh-key-openstack-edpm-ipam\") pod \"711888ee-ec08-437f-bf74-54ea092796bf\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.922112 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n7gv\" (UniqueName: \"kubernetes.io/projected/711888ee-ec08-437f-bf74-54ea092796bf-kube-api-access-5n7gv\") pod \"711888ee-ec08-437f-bf74-54ea092796bf\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.922156 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-bootstrap-combined-ca-bundle\") pod \"711888ee-ec08-437f-bf74-54ea092796bf\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.922190 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-inventory\") pod \"711888ee-ec08-437f-bf74-54ea092796bf\" (UID: \"711888ee-ec08-437f-bf74-54ea092796bf\") " Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.929723 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "711888ee-ec08-437f-bf74-54ea092796bf" (UID: "711888ee-ec08-437f-bf74-54ea092796bf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.931837 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711888ee-ec08-437f-bf74-54ea092796bf-kube-api-access-5n7gv" (OuterVolumeSpecName: "kube-api-access-5n7gv") pod "711888ee-ec08-437f-bf74-54ea092796bf" (UID: "711888ee-ec08-437f-bf74-54ea092796bf"). 
InnerVolumeSpecName "kube-api-access-5n7gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.968938 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-inventory" (OuterVolumeSpecName: "inventory") pod "711888ee-ec08-437f-bf74-54ea092796bf" (UID: "711888ee-ec08-437f-bf74-54ea092796bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:27:56 crc kubenswrapper[4872]: I0203 06:27:56.971364 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "711888ee-ec08-437f-bf74-54ea092796bf" (UID: "711888ee-ec08-437f-bf74-54ea092796bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.024427 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.024480 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n7gv\" (UniqueName: \"kubernetes.io/projected/711888ee-ec08-437f-bf74-54ea092796bf-kube-api-access-5n7gv\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.024495 4872 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.024507 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711888ee-ec08-437f-bf74-54ea092796bf-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.288898 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" event={"ID":"711888ee-ec08-437f-bf74-54ea092796bf","Type":"ContainerDied","Data":"c524cc79f1408c9ebf020426c77acdca3f802268b98c094bf97846f3bf484e94"} Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.288974 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c524cc79f1408c9ebf020426c77acdca3f802268b98c094bf97846f3bf484e94" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.288988 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.380529 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7"] Feb 03 06:27:57 crc kubenswrapper[4872]: E0203 06:27:57.380928 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="registry-server" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.380943 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="registry-server" Feb 03 06:27:57 crc kubenswrapper[4872]: E0203 06:27:57.380956 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711888ee-ec08-437f-bf74-54ea092796bf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.380965 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="711888ee-ec08-437f-bf74-54ea092796bf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 06:27:57 crc kubenswrapper[4872]: E0203 06:27:57.380989 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="extract-content" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.380996 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="extract-content" Feb 03 06:27:57 crc kubenswrapper[4872]: E0203 06:27:57.381008 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="extract-utilities" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.381014 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="extract-utilities" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.381187 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="711888ee-ec08-437f-bf74-54ea092796bf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.381205 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="4239492e-41f5-4550-ad80-df7c558205a1" containerName="registry-server" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.381749 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.383852 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.384182 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.385966 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.392035 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.410424 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7"] Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.533397 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759v2\" (UniqueName: \"kubernetes.io/projected/72bf0048-7229-4354-a6e3-1c508f3bacef-kube-api-access-759v2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.533456 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.533555 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.635395 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.635548 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-759v2\" (UniqueName: \"kubernetes.io/projected/72bf0048-7229-4354-a6e3-1c508f3bacef-kube-api-access-759v2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.635575 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.641296 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.646329 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.657450 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-759v2\" (UniqueName: \"kubernetes.io/projected/72bf0048-7229-4354-a6e3-1c508f3bacef-kube-api-access-759v2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:57 crc kubenswrapper[4872]: I0203 06:27:57.699398 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" Feb 03 06:27:58 crc kubenswrapper[4872]: I0203 06:27:58.056058 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7"] Feb 03 06:27:58 crc kubenswrapper[4872]: I0203 06:27:58.300181 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" event={"ID":"72bf0048-7229-4354-a6e3-1c508f3bacef","Type":"ContainerStarted","Data":"bfe94917444ad04da4464cc28d890a55fc2c526a689949146a73ab2d40cfb8da"} Feb 03 06:27:59 crc kubenswrapper[4872]: I0203 06:27:59.310569 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" event={"ID":"72bf0048-7229-4354-a6e3-1c508f3bacef","Type":"ContainerStarted","Data":"f5e61c2d3bc4b5f7beacf805469974e5ab716585c08995dcfb32a554a450a8fd"} Feb 03 06:27:59 crc kubenswrapper[4872]: I0203 06:27:59.340467 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" podStartSLOduration=1.924887572 podStartE2EDuration="2.340443793s" podCreationTimestamp="2026-02-03 06:27:57 +0000 UTC" firstStartedPulling="2026-02-03 06:27:58.069640756 +0000 UTC m=+1648.652332170" lastFinishedPulling="2026-02-03 06:27:58.485196977 +0000 UTC m=+1649.067888391" observedRunningTime="2026-02-03 06:27:59.332344803 +0000 UTC m=+1649.915036227" watchObservedRunningTime="2026-02-03 06:27:59.340443793 +0000 UTC m=+1649.923135227" Feb 03 06:28:00 crc kubenswrapper[4872]: I0203 06:28:00.039399 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nddft"] Feb 03 06:28:00 crc kubenswrapper[4872]: I0203 
06:28:00.050584 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nddft"] Feb 03 06:28:00 crc kubenswrapper[4872]: I0203 06:28:00.144533 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6abead70-db1e-4e93-9e76-214427aa519a" path="/var/lib/kubelet/pods/6abead70-db1e-4e93-9e76-214427aa519a/volumes" Feb 03 06:28:01 crc kubenswrapper[4872]: I0203 06:28:01.033514 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-daec-account-create-update-kb2fr"] Feb 03 06:28:01 crc kubenswrapper[4872]: I0203 06:28:01.049639 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-v7btd"] Feb 03 06:28:01 crc kubenswrapper[4872]: I0203 06:28:01.058518 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dvnrz"] Feb 03 06:28:01 crc kubenswrapper[4872]: I0203 06:28:01.069260 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-dvnrz"] Feb 03 06:28:01 crc kubenswrapper[4872]: I0203 06:28:01.080808 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-v7btd"] Feb 03 06:28:01 crc kubenswrapper[4872]: I0203 06:28:01.087661 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-daec-account-create-update-kb2fr"] Feb 03 06:28:02 crc kubenswrapper[4872]: I0203 06:28:02.329602 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf" path="/var/lib/kubelet/pods/6fadbf9c-ac7a-47c5-a6a9-6914129ce7cf/volumes" Feb 03 06:28:02 crc kubenswrapper[4872]: I0203 06:28:02.639315 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897f2bae-39b3-4862-b0a9-d9652b593e98" path="/var/lib/kubelet/pods/897f2bae-39b3-4862-b0a9-d9652b593e98/volumes" Feb 03 06:28:03 crc kubenswrapper[4872]: I0203 06:28:03.029302 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953773a5-8de7-4b75-9e0b-a0effcbb297c" path="/var/lib/kubelet/pods/953773a5-8de7-4b75-9e0b-a0effcbb297c/volumes" Feb 03 06:28:03 crc kubenswrapper[4872]: E0203 06:28:03.233809 4872 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.112s" Feb 03 06:28:05 crc kubenswrapper[4872]: I0203 06:28:05.060560 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-53af-account-create-update-z7v7c"] Feb 03 06:28:05 crc kubenswrapper[4872]: I0203 06:28:05.088863 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-53af-account-create-update-z7v7c"] Feb 03 06:28:05 crc kubenswrapper[4872]: I0203 06:28:05.096594 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a758-account-create-update-cjr6t"] Feb 03 06:28:05 crc kubenswrapper[4872]: I0203 06:28:05.108547 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a758-account-create-update-cjr6t"] Feb 03 06:28:06 crc kubenswrapper[4872]: I0203 06:28:06.140217 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33dd1ede-3c66-4876-8ada-0ed81db2d705" path="/var/lib/kubelet/pods/33dd1ede-3c66-4876-8ada-0ed81db2d705/volumes" Feb 03 06:28:06 crc kubenswrapper[4872]: I0203 06:28:06.144633 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a2c89e-c85a-407d-a759-ac0851d0636f" path="/var/lib/kubelet/pods/b0a2c89e-c85a-407d-a759-ac0851d0636f/volumes" Feb 03 06:28:10 crc kubenswrapper[4872]: I0203 
06:28:10.065699 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-767c2"] Feb 03 06:28:10 crc kubenswrapper[4872]: I0203 06:28:10.079459 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-767c2"] Feb 03 06:28:10 crc kubenswrapper[4872]: I0203 06:28:10.142038 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f4db43-1e9b-4fb5-8781-aeb301bc1298" path="/var/lib/kubelet/pods/02f4db43-1e9b-4fb5-8781-aeb301bc1298/volumes" Feb 03 06:28:11 crc kubenswrapper[4872]: I0203 06:28:11.122564 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:28:11 crc kubenswrapper[4872]: E0203 06:28:11.122918 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:28:26 crc kubenswrapper[4872]: I0203 06:28:26.123134 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:28:26 crc kubenswrapper[4872]: E0203 06:28:26.126064 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:28:38 crc kubenswrapper[4872]: I0203 06:28:38.124415 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:28:38 crc kubenswrapper[4872]: E0203 06:28:38.125861 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.047615 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6j26n"] Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.056965 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6j26n"] Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.137011 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a290b0d-8a5a-426e-9561-9372ba41afb5" path="/var/lib/kubelet/pods/5a290b0d-8a5a-426e-9561-9372ba41afb5/volumes" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.150140 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k75pr"] Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.152448 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.168575 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k75pr"] Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.324361 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bpr\" (UniqueName: \"kubernetes.io/projected/9ed2c949-d024-499f-b5e3-6b0560be1876-kube-api-access-k8bpr\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.324427 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-utilities\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.324467 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-catalog-content\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.426560 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bpr\" (UniqueName: \"kubernetes.io/projected/9ed2c949-d024-499f-b5e3-6b0560be1876-kube-api-access-k8bpr\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.426642 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-utilities\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.426684 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-catalog-content\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.427200 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-utilities\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.427226 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-catalog-content\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.458788 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k8bpr\" (UniqueName: \"kubernetes.io/projected/9ed2c949-d024-499f-b5e3-6b0560be1876-kube-api-access-k8bpr\") pod \"certified-operators-k75pr\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") " pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:48 crc kubenswrapper[4872]: I0203 06:28:48.542196 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k75pr" Feb 03 06:28:49 crc kubenswrapper[4872]: I0203 06:28:49.111120 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k75pr"] Feb 03 06:28:49 crc kubenswrapper[4872]: I0203 06:28:49.867298 4872 generic.go:334] "Generic (PLEG): container finished" podID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerID="e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036" exitCode=0 Feb 03 06:28:49 crc kubenswrapper[4872]: I0203 06:28:49.867359 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k75pr" event={"ID":"9ed2c949-d024-499f-b5e3-6b0560be1876","Type":"ContainerDied","Data":"e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036"} Feb 03 06:28:49 crc kubenswrapper[4872]: I0203 06:28:49.867424 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k75pr" event={"ID":"9ed2c949-d024-499f-b5e3-6b0560be1876","Type":"ContainerStarted","Data":"b9e5c6f591a7f113e37eff28b76ef05e2086645774a87c3f75736c366921062d"} Feb 03 06:28:50 crc kubenswrapper[4872]: I0203 06:28:50.139333 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:28:50 crc kubenswrapper[4872]: E0203 06:28:50.140068 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:28:50 crc kubenswrapper[4872]: I0203 06:28:50.879329 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k75pr" event={"ID":"9ed2c949-d024-499f-b5e3-6b0560be1876","Type":"ContainerStarted","Data":"eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4"} Feb 03 06:28:52 crc kubenswrapper[4872]: I0203 06:28:52.921883 4872 generic.go:334] "Generic (PLEG): container finished" podID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerID="eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4" exitCode=0 Feb 03 06:28:52 crc kubenswrapper[4872]: I0203 06:28:52.921986 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k75pr" event={"ID":"9ed2c949-d024-499f-b5e3-6b0560be1876","Type":"ContainerDied","Data":"eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4"} Feb 03 06:28:53 crc kubenswrapper[4872]: I0203 06:28:53.936042 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k75pr" event={"ID":"9ed2c949-d024-499f-b5e3-6b0560be1876","Type":"ContainerStarted","Data":"d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6"} Feb 03 06:28:53 crc kubenswrapper[4872]: I0203 06:28:53.961762 4872 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k75pr" podStartSLOduration=2.432362078 podStartE2EDuration="5.961733377s" podCreationTimestamp="2026-02-03 06:28:48 +0000 UTC" firstStartedPulling="2026-02-03 06:28:49.872095688 +0000 UTC m=+1700.454787102" lastFinishedPulling="2026-02-03 06:28:53.401466977 +0000 UTC m=+1703.984158401" observedRunningTime="2026-02-03 06:28:53.956105309 +0000 UTC m=+1704.538796723" watchObservedRunningTime="2026-02-03 06:28:53.961733377 +0000 UTC m=+1704.544424831"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.302382 4872 scope.go:117] "RemoveContainer" containerID="9cfd4bdac0a9127767f2a93034ca9fbd595b79db4fdad6a038383cdf6d93a465"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.334190 4872 scope.go:117] "RemoveContainer" containerID="636ebcf7d176b4ce5a7a48e93649aeec52ded44c6b0e32e0c5bbffb0567212d5"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.377536 4872 scope.go:117] "RemoveContainer" containerID="d7a504265e555e9ade9f829748d64c7eaef3223ef8757798f1eab69b56e9c378"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.423632 4872 scope.go:117] "RemoveContainer" containerID="f8acfac2bb75cc05862d7541df57c2245ba35a80a74808ba4a40f6f6c0391ba6"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.462538 4872 scope.go:117] "RemoveContainer" containerID="59c38abc9ba32cbc9ab7b9054c74051aa77e09e322dbe8fb028a930171ab21e5"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.502939 4872 scope.go:117] "RemoveContainer" containerID="9139c28bb4be76837aa461dba83b3e1b978a20ba6143d96447f984346aabf17a"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.544284 4872 scope.go:117] "RemoveContainer" containerID="e18fcfe1cfbc6f2f3cd2fdac470a460461c236fab47fe71b3826ab2aef0c995f"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.572273 4872 scope.go:117] "RemoveContainer" containerID="24ed544bc2f8dc45039c6c2f1503b44dfa31ecf46666012fd1bdc7f1737b1fa2"
Feb 03 06:28:56 crc kubenswrapper[4872]: I0203 06:28:56.625566 4872 scope.go:117] "RemoveContainer" containerID="c086e84649ad4be8c51596273cc319f77b31ce662e1befb6ec44377821d1f9e2"
Feb 03 06:28:58 crc kubenswrapper[4872]: I0203 06:28:58.543276 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k75pr"
Feb 03 06:28:58 crc kubenswrapper[4872]: I0203 06:28:58.544115 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k75pr"
Feb 03 06:28:58 crc kubenswrapper[4872]: I0203 06:28:58.603279 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k75pr"
Feb 03 06:28:59 crc kubenswrapper[4872]: I0203 06:28:59.089968 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k75pr"
Feb 03 06:28:59 crc kubenswrapper[4872]: I0203 06:28:59.162838 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k75pr"]
Feb 03 06:29:00 crc kubenswrapper[4872]: I0203 06:29:00.042325 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-98455"]
Feb 03 06:29:00 crc kubenswrapper[4872]: I0203 06:29:00.053791 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-98455"]
Feb 03 06:29:00 crc kubenswrapper[4872]: I0203 06:29:00.136072 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574f0ecd-3f3a-448f-a958-9c606833ad00" path="/var/lib/kubelet/pods/574f0ecd-3f3a-448f-a958-9c606833ad00/volumes"
Feb 03 06:29:01 crc kubenswrapper[4872]: I0203 06:29:01.036782 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k75pr" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="registry-server" containerID="cri-o://d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6" gracePeriod=2
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.008803 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k75pr"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.046631 4872 generic.go:334] "Generic (PLEG): container finished" podID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerID="d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6" exitCode=0
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.046858 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k75pr" event={"ID":"9ed2c949-d024-499f-b5e3-6b0560be1876","Type":"ContainerDied","Data":"d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6"}
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.046972 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k75pr" event={"ID":"9ed2c949-d024-499f-b5e3-6b0560be1876","Type":"ContainerDied","Data":"b9e5c6f591a7f113e37eff28b76ef05e2086645774a87c3f75736c366921062d"}
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.047037 4872 scope.go:117] "RemoveContainer" containerID="d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.047238 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k75pr"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.059938 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8bpr\" (UniqueName: \"kubernetes.io/projected/9ed2c949-d024-499f-b5e3-6b0560be1876-kube-api-access-k8bpr\") pod \"9ed2c949-d024-499f-b5e3-6b0560be1876\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") "
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.060153 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-utilities\") pod \"9ed2c949-d024-499f-b5e3-6b0560be1876\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") "
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.060225 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-catalog-content\") pod \"9ed2c949-d024-499f-b5e3-6b0560be1876\" (UID: \"9ed2c949-d024-499f-b5e3-6b0560be1876\") "
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.063914 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-utilities" (OuterVolumeSpecName: "utilities") pod "9ed2c949-d024-499f-b5e3-6b0560be1876" (UID: "9ed2c949-d024-499f-b5e3-6b0560be1876"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.068221 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed2c949-d024-499f-b5e3-6b0560be1876-kube-api-access-k8bpr" (OuterVolumeSpecName: "kube-api-access-k8bpr") pod "9ed2c949-d024-499f-b5e3-6b0560be1876" (UID: "9ed2c949-d024-499f-b5e3-6b0560be1876"). InnerVolumeSpecName "kube-api-access-k8bpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.084516 4872 scope.go:117] "RemoveContainer" containerID="eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.112428 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ed2c949-d024-499f-b5e3-6b0560be1876" (UID: "9ed2c949-d024-499f-b5e3-6b0560be1876"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.122608 4872 scope.go:117] "RemoveContainer" containerID="e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.166423 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8bpr\" (UniqueName: \"kubernetes.io/projected/9ed2c949-d024-499f-b5e3-6b0560be1876-kube-api-access-k8bpr\") on node \"crc\" DevicePath \"\""
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.166456 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.166469 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed2c949-d024-499f-b5e3-6b0560be1876-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.172472 4872 scope.go:117] "RemoveContainer" containerID="d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6"
Feb 03 06:29:02 crc kubenswrapper[4872]: E0203 06:29:02.173114 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6\": container with ID starting with d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6 not found: ID does not exist" containerID="d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.173153 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6"} err="failed to get container status \"d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6\": rpc error: code = NotFound desc = could not find container \"d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6\": container with ID starting with d6b34ae225528628968ef107664e87c3eaead3bfa614678e478e5697ec21b7a6 not found: ID does not exist"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.173179 4872 scope.go:117] "RemoveContainer" containerID="eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4"
Feb 03 06:29:02 crc kubenswrapper[4872]: E0203 06:29:02.173612 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4\": container with ID starting with eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4 not found: ID does not exist" containerID="eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.173727 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4"} err="failed to get container status \"eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4\": rpc error: code = NotFound desc = could not find container \"eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4\": container with ID starting with eb2887e9ff3316f60d758f2682ea0e107257539fb82954aa4b0f331d5d1d08a4 not found: ID does not exist"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.173764 4872 scope.go:117] "RemoveContainer" containerID="e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036"
Feb 03 06:29:02 crc kubenswrapper[4872]: E0203 06:29:02.174181 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036\": container with ID starting with e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036 not found: ID does not exist" containerID="e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.174214 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036"} err="failed to get container status \"e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036\": rpc error: code = NotFound desc = could not find container \"e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036\": container with ID starting with e88041059dac760c82473e3c2fd5d599fe545803727b781226aa69ab21a61036 not found: ID does not exist"
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.377726 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k75pr"]
Feb 03 06:29:02 crc kubenswrapper[4872]: I0203 06:29:02.384757 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k75pr"]
Feb 03 06:29:04 crc kubenswrapper[4872]: I0203 06:29:04.122759 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"
Feb 03 06:29:04 crc kubenswrapper[4872]: E0203 06:29:04.123396 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16"
Feb 03 06:29:04 crc kubenswrapper[4872]: I0203 06:29:04.143749 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" path="/var/lib/kubelet/pods/9ed2c949-d024-499f-b5e3-6b0560be1876/volumes"
Feb 03 06:29:17 crc kubenswrapper[4872]: I0203 06:29:17.045894 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dbkgp"]
Feb 03 06:29:17 crc kubenswrapper[4872]: I0203 06:29:17.057455 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dbkgp"]
Feb 03 06:29:17 crc kubenswrapper[4872]: I0203 06:29:17.123026 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"
Feb 03 06:29:17 crc kubenswrapper[4872]: E0203 06:29:17.123352 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16"
Feb 03 06:29:18 crc kubenswrapper[4872]: I0203 06:29:18.037740 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cr92l"]
Feb 03 06:29:18 crc kubenswrapper[4872]: I0203 06:29:18.046872 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-95dn6"]
Feb 03 06:29:18 crc kubenswrapper[4872]: I0203 06:29:18.054530 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-95dn6"]
Feb 03 06:29:18 crc kubenswrapper[4872]: I0203 06:29:18.062007 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cr92l"]
Feb 03 06:29:18 crc kubenswrapper[4872]: I0203 06:29:18.134160 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13e3e69a-6af1-439d-a9e4-2295a6206492" path="/var/lib/kubelet/pods/13e3e69a-6af1-439d-a9e4-2295a6206492/volumes"
Feb 03 06:29:18 crc kubenswrapper[4872]: I0203 06:29:18.135731 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48007ee1-953a-42c7-9279-2f348eb7bffb" path="/var/lib/kubelet/pods/48007ee1-953a-42c7-9279-2f348eb7bffb/volumes"
Feb 03 06:29:18 crc kubenswrapper[4872]: I0203 06:29:18.137026 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca50ee1-d592-41c3-869f-480e7d3d02f8" path="/var/lib/kubelet/pods/9ca50ee1-d592-41c3-869f-480e7d3d02f8/volumes"
Feb 03 06:29:22 crc kubenswrapper[4872]: I0203 06:29:22.050186 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-m54jj"]
Feb 03 06:29:22 crc kubenswrapper[4872]: I0203 06:29:22.075453 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-m54jj"]
Feb 03 06:29:22 crc kubenswrapper[4872]: I0203 06:29:22.136963 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d670aec-b637-4fe6-b046-794d9628b49b" path="/var/lib/kubelet/pods/9d670aec-b637-4fe6-b046-794d9628b49b/volumes"
Feb 03 06:29:32 crc kubenswrapper[4872]: I0203 06:29:32.125453 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"
Feb 03 06:29:32 crc kubenswrapper[4872]: E0203 06:29:32.126398 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16"
Feb 03 06:29:32 crc kubenswrapper[4872]: I0203 06:29:32.338089 4872 generic.go:334] "Generic (PLEG): container finished" podID="72bf0048-7229-4354-a6e3-1c508f3bacef" containerID="f5e61c2d3bc4b5f7beacf805469974e5ab716585c08995dcfb32a554a450a8fd" exitCode=0
Feb 03 06:29:32 crc kubenswrapper[4872]: I0203 06:29:32.338154 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" event={"ID":"72bf0048-7229-4354-a6e3-1c508f3bacef","Type":"ContainerDied","Data":"f5e61c2d3bc4b5f7beacf805469974e5ab716585c08995dcfb32a554a450a8fd"}
Feb 03 06:29:33 crc kubenswrapper[4872]: I0203 06:29:33.795192 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7"
Feb 03 06:29:33 crc kubenswrapper[4872]: I0203 06:29:33.914310 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-759v2\" (UniqueName: \"kubernetes.io/projected/72bf0048-7229-4354-a6e3-1c508f3bacef-kube-api-access-759v2\") pod \"72bf0048-7229-4354-a6e3-1c508f3bacef\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") "
Feb 03 06:29:33 crc kubenswrapper[4872]: I0203 06:29:33.914484 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-inventory\") pod \"72bf0048-7229-4354-a6e3-1c508f3bacef\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") "
Feb 03 06:29:33 crc kubenswrapper[4872]: I0203 06:29:33.914511 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-ssh-key-openstack-edpm-ipam\") pod \"72bf0048-7229-4354-a6e3-1c508f3bacef\" (UID: \"72bf0048-7229-4354-a6e3-1c508f3bacef\") "
Feb 03 06:29:33 crc kubenswrapper[4872]: I0203 06:29:33.920729 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72bf0048-7229-4354-a6e3-1c508f3bacef-kube-api-access-759v2" (OuterVolumeSpecName: "kube-api-access-759v2") pod "72bf0048-7229-4354-a6e3-1c508f3bacef" (UID: "72bf0048-7229-4354-a6e3-1c508f3bacef"). InnerVolumeSpecName "kube-api-access-759v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:29:33 crc kubenswrapper[4872]: I0203 06:29:33.945922 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72bf0048-7229-4354-a6e3-1c508f3bacef" (UID: "72bf0048-7229-4354-a6e3-1c508f3bacef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:29:33 crc kubenswrapper[4872]: I0203 06:29:33.965382 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-inventory" (OuterVolumeSpecName: "inventory") pod "72bf0048-7229-4354-a6e3-1c508f3bacef" (UID: "72bf0048-7229-4354-a6e3-1c508f3bacef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.016992 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.017032 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72bf0048-7229-4354-a6e3-1c508f3bacef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.017046 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-759v2\" (UniqueName: \"kubernetes.io/projected/72bf0048-7229-4354-a6e3-1c508f3bacef-kube-api-access-759v2\") on node \"crc\" DevicePath \"\""
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.358190 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7" event={"ID":"72bf0048-7229-4354-a6e3-1c508f3bacef","Type":"ContainerDied","Data":"bfe94917444ad04da4464cc28d890a55fc2c526a689949146a73ab2d40cfb8da"}
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.358232 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe94917444ad04da4464cc28d890a55fc2c526a689949146a73ab2d40cfb8da"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.358308 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.465624 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"]
Feb 03 06:29:34 crc kubenswrapper[4872]: E0203 06:29:34.466327 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="extract-content"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.466415 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="extract-content"
Feb 03 06:29:34 crc kubenswrapper[4872]: E0203 06:29:34.466482 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72bf0048-7229-4354-a6e3-1c508f3bacef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.466549 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="72bf0048-7229-4354-a6e3-1c508f3bacef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:29:34 crc kubenswrapper[4872]: E0203 06:29:34.466623 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="extract-utilities"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.466673 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="extract-utilities"
Feb 03 06:29:34 crc kubenswrapper[4872]: E0203 06:29:34.466820 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="registry-server"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.466905 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="registry-server"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.467160 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="72bf0048-7229-4354-a6e3-1c508f3bacef" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.467238 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed2c949-d024-499f-b5e3-6b0560be1876" containerName="registry-server"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.467977 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.471537 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.471869 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.472113 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.473296 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.478630 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"]
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.627583 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tpf\" (UniqueName: \"kubernetes.io/projected/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-kube-api-access-c2tpf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.627694 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.627745 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.729865 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.730021 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tpf\" (UniqueName: \"kubernetes.io/projected/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-kube-api-access-c2tpf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.730117 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.734586 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.736224 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.747480 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tpf\" (UniqueName: \"kubernetes.io/projected/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-kube-api-access-c2tpf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8927k\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:34 crc kubenswrapper[4872]: I0203 06:29:34.784386 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:29:35 crc kubenswrapper[4872]: I0203 06:29:35.352268 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"]
Feb 03 06:29:35 crc kubenswrapper[4872]: I0203 06:29:35.370354 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k" event={"ID":"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2","Type":"ContainerStarted","Data":"c4d9c42b246353e3854c394131ab2f2fda4e85f1c788f65cd9d97eb15393ae48"}
Feb 03 06:29:36 crc kubenswrapper[4872]: I0203 06:29:36.381883 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k" event={"ID":"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2","Type":"ContainerStarted","Data":"cbec222431c97a0cf08d97bd3f02cae2bacad19f1e4718ac96d6fb5c7835b0f9"}
Feb 03 06:29:36 crc kubenswrapper[4872]: I0203 06:29:36.412596 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k" podStartSLOduration=2.215645844 podStartE2EDuration="2.412574432s" podCreationTimestamp="2026-02-03 06:29:34 +0000 UTC" firstStartedPulling="2026-02-03 06:29:35.350829857 +0000 UTC m=+1745.933521271" lastFinishedPulling="2026-02-03 06:29:35.547758445 +0000 UTC m=+1746.130449859" observedRunningTime="2026-02-03 06:29:36.400349423 +0000 UTC m=+1746.983040847" watchObservedRunningTime="2026-02-03 06:29:36.412574432 +0000 UTC m=+1746.995265856"
Feb 03 06:29:44 crc kubenswrapper[4872]: I0203 06:29:44.123546 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"
Feb 03 06:29:44 crc kubenswrapper[4872]: E0203 06:29:44.124620 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16"
Feb 03 06:29:52 crc kubenswrapper[4872]: I0203 06:29:52.041911 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dzr2t"]
Feb 03 06:29:52 crc kubenswrapper[4872]: I0203 06:29:52.049761 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-c9xvp"]
Feb 03 06:29:52 crc kubenswrapper[4872]: I0203 06:29:52.058002 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dzr2t"]
Feb 03 06:29:52 crc kubenswrapper[4872]: I0203 06:29:52.067522 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-c9xvp"]
Feb 03 06:29:52 crc kubenswrapper[4872]: I0203 06:29:52.134176 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49628158-8aeb-4512-9585-91db75925666" path="/var/lib/kubelet/pods/49628158-8aeb-4512-9585-91db75925666/volumes"
Feb 03 06:29:52 crc kubenswrapper[4872]: I0203 06:29:52.135369 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8ac6be-5bf4-4866-b8c7-073a00d94310" path="/var/lib/kubelet/pods/5f8ac6be-5bf4-4866-b8c7-073a00d94310/volumes"
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.036425 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-669f-account-create-update-rk72d"]
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.045764 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-669f-account-create-update-rk72d"]
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.057851 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89b4-account-create-update-wbhm5"]
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.066771 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-njmck"]
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.092459 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-08b5-account-create-update-mfr7x"]
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.105451 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-89b4-account-create-update-wbhm5"]
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.112855 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-njmck"]
Feb 03 06:29:53 crc kubenswrapper[4872]: I0203 06:29:53.120849 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-08b5-account-create-update-mfr7x"]
Feb 03 06:29:54 crc kubenswrapper[4872]: I0203 06:29:54.139145 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e43edca-f702-4b9f-b8be-f95bda7b7a1e" path="/var/lib/kubelet/pods/5e43edca-f702-4b9f-b8be-f95bda7b7a1e/volumes"
Feb 03 06:29:54 crc kubenswrapper[4872]: I0203 06:29:54.141777 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0da642-81fc-4cf5-9933-210cf0f17ba9" path="/var/lib/kubelet/pods/ac0da642-81fc-4cf5-9933-210cf0f17ba9/volumes"
Feb 03 06:29:54 crc kubenswrapper[4872]: I0203 06:29:54.143300 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f" path="/var/lib/kubelet/pods/e0fa54e7-2fe1-4ce1-a4a8-7de79a58b27f/volumes"
Feb 03 06:29:54 crc kubenswrapper[4872]: I0203 06:29:54.144586 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff6dd5da-4798-4300-8752-0eafdd05cf40" path="/var/lib/kubelet/pods/ff6dd5da-4798-4300-8752-0eafdd05cf40/volumes"
Feb 03 06:29:56 crc kubenswrapper[4872]: I0203 06:29:56.853327 4872 scope.go:117] "RemoveContainer" containerID="30aaa98d06b743c6f429cc642a1dd4f7e0e22515b9004bc1818661d459891112"
Feb 03 06:29:56 crc kubenswrapper[4872]: I0203 06:29:56.893028 4872 scope.go:117] "RemoveContainer" containerID="bb76e23e546a526cf389bb883de5831f95449710f17678077ca760843b473a79"
Feb 03 06:29:56 crc kubenswrapper[4872]: I0203 06:29:56.943091 4872 scope.go:117] "RemoveContainer" containerID="c3844b84b7b30f14131073b83e1edff28344aa6bdbef9a48f80b5fd33cc726ae"
Feb 03 06:29:56 crc kubenswrapper[4872]: I0203 06:29:56.985504 4872 scope.go:117] "RemoveContainer" containerID="e047b8817d33a644cf3b0bcbeda1ba1ab9570751e85a39efff2bfd5b043db152"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.021198 4872 scope.go:117] "RemoveContainer" containerID="5bf5bc48e219b91be187fb1e5a8a28f87f74da986ff2a591b8e5591bdcd0a68e"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.080377 4872 scope.go:117] "RemoveContainer" containerID="7a05757675f2b606a0f56fa7b9c83a0b39f270b9553586945b17b42243f2d508"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.122570 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"
Feb 03 06:29:57 crc kubenswrapper[4872]: E0203 06:29:57.123309 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.133362 4872 scope.go:117] "RemoveContainer" containerID="2a723d3a6a85c2b1ba0422b2b817e2490cb46df9fc9d206d89e541f7dabc42ba"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.182268 4872 scope.go:117] "RemoveContainer" containerID="5bf51ac0ecb2ab6a9284994742efdbe55ca56dd83ca85291b53d4d15f1e8268b"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.212887 4872 scope.go:117] "RemoveContainer" containerID="ded2277f618d56eebb5243ee02a00c08ae5cec0200584aae48f45922633f19ec"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.240549 4872 scope.go:117] "RemoveContainer" containerID="714203209d705a9a92725bbec2f656d9cc9a833603fdc025736a41f428fc23ca"
Feb 03 06:29:57 crc kubenswrapper[4872]: I0203 06:29:57.276739 4872 scope.go:117] "RemoveContainer" containerID="c83d2ff3724bd518c1a1b53e81f3ee27c447946dfae3855a6c7384764017fa72"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.215634 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"]
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.217884 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.229248 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.229506 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.234132 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"]
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.272356 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dd6034-4be8-40ee-91c7-479015131095-secret-volume\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.272406 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dd6034-4be8-40ee-91c7-479015131095-config-volume\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.272430 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgzm\" (UniqueName: \"kubernetes.io/projected/c4dd6034-4be8-40ee-91c7-479015131095-kube-api-access-vtgzm\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.374808 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dd6034-4be8-40ee-91c7-479015131095-secret-volume\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.374855 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dd6034-4be8-40ee-91c7-479015131095-config-volume\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.374913 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgzm\" (UniqueName: \"kubernetes.io/projected/c4dd6034-4be8-40ee-91c7-479015131095-kube-api-access-vtgzm\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.376289 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dd6034-4be8-40ee-91c7-479015131095-config-volume\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.381118 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dd6034-4be8-40ee-91c7-479015131095-secret-volume\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.395355 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgzm\" (UniqueName: \"kubernetes.io/projected/c4dd6034-4be8-40ee-91c7-479015131095-kube-api-access-vtgzm\") pod \"collect-profiles-29501670-d55zh\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:00 crc kubenswrapper[4872]: I0203 06:30:00.555191 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:01 crc kubenswrapper[4872]: I0203 06:30:01.033371 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"]
Feb 03 06:30:01 crc kubenswrapper[4872]: I0203 06:30:01.642218 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh" event={"ID":"c4dd6034-4be8-40ee-91c7-479015131095","Type":"ContainerStarted","Data":"b630b27f387eaff76bbb1025a9c8f85eaf1464021f648ac627668b36ab4c3ed5"}
Feb 03 06:30:01 crc kubenswrapper[4872]: I0203 06:30:01.643768 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh" event={"ID":"c4dd6034-4be8-40ee-91c7-479015131095","Type":"ContainerStarted","Data":"3c593ece4dbb88e4da4d9f7ecce45267b9437505d83925eb514964c2e5440b8c"}
Feb 03 06:30:01 crc kubenswrapper[4872]: I0203 06:30:01.659110 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh" podStartSLOduration=1.6590900880000001 podStartE2EDuration="1.659090088s" podCreationTimestamp="2026-02-03 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 06:30:01.658018172 +0000 UTC m=+1772.240709646" watchObservedRunningTime="2026-02-03 06:30:01.659090088 +0000 UTC m=+1772.241781532"
Feb 03 06:30:02 crc kubenswrapper[4872]: I0203 06:30:02.650884 4872 generic.go:334] "Generic (PLEG): container finished" podID="c4dd6034-4be8-40ee-91c7-479015131095" containerID="b630b27f387eaff76bbb1025a9c8f85eaf1464021f648ac627668b36ab4c3ed5" exitCode=0
Feb 03 06:30:02 crc kubenswrapper[4872]: I0203 06:30:02.651045 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh" event={"ID":"c4dd6034-4be8-40ee-91c7-479015131095","Type":"ContainerDied","Data":"b630b27f387eaff76bbb1025a9c8f85eaf1464021f648ac627668b36ab4c3ed5"}
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.027422 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.145552 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtgzm\" (UniqueName: \"kubernetes.io/projected/c4dd6034-4be8-40ee-91c7-479015131095-kube-api-access-vtgzm\") pod \"c4dd6034-4be8-40ee-91c7-479015131095\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") "
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.146102 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dd6034-4be8-40ee-91c7-479015131095-secret-volume\") pod \"c4dd6034-4be8-40ee-91c7-479015131095\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") "
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.146217 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dd6034-4be8-40ee-91c7-479015131095-config-volume\") pod \"c4dd6034-4be8-40ee-91c7-479015131095\" (UID: \"c4dd6034-4be8-40ee-91c7-479015131095\") "
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.146900 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4dd6034-4be8-40ee-91c7-479015131095-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4dd6034-4be8-40ee-91c7-479015131095" (UID: "c4dd6034-4be8-40ee-91c7-479015131095"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.152057 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4dd6034-4be8-40ee-91c7-479015131095-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c4dd6034-4be8-40ee-91c7-479015131095" (UID: "c4dd6034-4be8-40ee-91c7-479015131095"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.164594 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4dd6034-4be8-40ee-91c7-479015131095-kube-api-access-vtgzm" (OuterVolumeSpecName: "kube-api-access-vtgzm") pod "c4dd6034-4be8-40ee-91c7-479015131095" (UID: "c4dd6034-4be8-40ee-91c7-479015131095"). InnerVolumeSpecName "kube-api-access-vtgzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.248947 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtgzm\" (UniqueName: \"kubernetes.io/projected/c4dd6034-4be8-40ee-91c7-479015131095-kube-api-access-vtgzm\") on node \"crc\" DevicePath \"\""
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.249799 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4dd6034-4be8-40ee-91c7-479015131095-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.249831 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4dd6034-4be8-40ee-91c7-479015131095-config-volume\") on node \"crc\" DevicePath \"\""
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.671349 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh" event={"ID":"c4dd6034-4be8-40ee-91c7-479015131095","Type":"ContainerDied","Data":"3c593ece4dbb88e4da4d9f7ecce45267b9437505d83925eb514964c2e5440b8c"}
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.671417 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c593ece4dbb88e4da4d9f7ecce45267b9437505d83925eb514964c2e5440b8c"
Feb 03 06:30:04 crc kubenswrapper[4872]: I0203 06:30:04.671617 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"
Feb 03 06:30:10 crc kubenswrapper[4872]: I0203 06:30:10.128499 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714"
Feb 03 06:30:10 crc kubenswrapper[4872]: I0203 06:30:10.731293 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"b81097029afc1ad6b47c80497badb62e78be48a3a9a40312044d79d9f688e3ba"}
Feb 03 06:30:50 crc kubenswrapper[4872]: I0203 06:30:50.142510 4872 generic.go:334] "Generic (PLEG): container finished" podID="4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2" containerID="cbec222431c97a0cf08d97bd3f02cae2bacad19f1e4718ac96d6fb5c7835b0f9" exitCode=0
Feb 03 06:30:50 crc kubenswrapper[4872]: I0203 06:30:50.146191 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k" event={"ID":"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2","Type":"ContainerDied","Data":"cbec222431c97a0cf08d97bd3f02cae2bacad19f1e4718ac96d6fb5c7835b0f9"}
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.629026 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.697555 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-ssh-key-openstack-edpm-ipam\") pod \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") "
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.698199 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2tpf\" (UniqueName: \"kubernetes.io/projected/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-kube-api-access-c2tpf\") pod \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") "
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.698233 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-inventory\") pod \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\" (UID: \"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2\") "
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.712530 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-kube-api-access-c2tpf" (OuterVolumeSpecName: "kube-api-access-c2tpf") pod "4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2" (UID: "4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2"). InnerVolumeSpecName "kube-api-access-c2tpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.733112 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2" (UID: "4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.752729 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-inventory" (OuterVolumeSpecName: "inventory") pod "4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2" (UID: "4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.801199 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.801230 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2tpf\" (UniqueName: \"kubernetes.io/projected/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-kube-api-access-c2tpf\") on node \"crc\" DevicePath \"\""
Feb 03 06:30:51 crc kubenswrapper[4872]: I0203 06:30:51.801240 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.167971 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k" event={"ID":"4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2","Type":"ContainerDied","Data":"c4d9c42b246353e3854c394131ab2f2fda4e85f1c788f65cd9d97eb15393ae48"}
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.168030 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d9c42b246353e3854c394131ab2f2fda4e85f1c788f65cd9d97eb15393ae48"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.168110 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8927k"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.291247 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"]
Feb 03 06:30:52 crc kubenswrapper[4872]: E0203 06:30:52.293153 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.293283 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:30:52 crc kubenswrapper[4872]: E0203 06:30:52.293343 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4dd6034-4be8-40ee-91c7-479015131095" containerName="collect-profiles"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.293399 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4dd6034-4be8-40ee-91c7-479015131095" containerName="collect-profiles"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.293607 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4dd6034-4be8-40ee-91c7-479015131095" containerName="collect-profiles"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.293679 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.294315 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.298025 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.298251 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.299065 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.299435 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.356212 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"]
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.415785 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.415842 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftqd6\" (UniqueName: \"kubernetes.io/projected/13b6c575-0d6a-4cf8-867d-3230bdded4e4-kube-api-access-ftqd6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.415903 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.517699 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.517760 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftqd6\" (UniqueName: \"kubernetes.io/projected/13b6c575-0d6a-4cf8-867d-3230bdded4e4-kube-api-access-ftqd6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.517813 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.522222 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.524266 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.544336 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftqd6\" (UniqueName: \"kubernetes.io/projected/13b6c575-0d6a-4cf8-867d-3230bdded4e4-kube-api-access-ftqd6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fjglw\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.612712 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:30:52 crc kubenswrapper[4872]: I0203 06:30:52.971570 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"]
Feb 03 06:30:53 crc kubenswrapper[4872]: I0203 06:30:53.175386 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw" event={"ID":"13b6c575-0d6a-4cf8-867d-3230bdded4e4","Type":"ContainerStarted","Data":"2213c6c8fd0fdedc023658da146e3fa757b7b703b65c02509aac407a3a48541c"}
Feb 03 06:30:54 crc kubenswrapper[4872]: I0203 06:30:54.048767 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-z48z4"]
Feb 03 06:30:54 crc kubenswrapper[4872]: I0203 06:30:54.053240 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-z48z4"]
Feb 03 06:30:54 crc kubenswrapper[4872]: I0203 06:30:54.134354 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd" path="/var/lib/kubelet/pods/5ce9288b-ecb9-4f5d-b901-52ef24f2d4fd/volumes"
Feb 03 06:30:54 crc kubenswrapper[4872]: I0203 06:30:54.183903 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw" event={"ID":"13b6c575-0d6a-4cf8-867d-3230bdded4e4","Type":"ContainerStarted","Data":"91b2b52f8514357289c8ede108c4c2c8fdec5f884bdfaf7d78d9e957ff55373a"}
Feb 03 06:30:57 crc kubenswrapper[4872]: I0203 06:30:57.495982 4872 scope.go:117] "RemoveContainer" containerID="3fbd1bcf2ec29bf66ac488f8c5866d938b3728cc6a13069ffe31a1b16de30844"
Feb 03 06:30:59 crc kubenswrapper[4872]: I0203 06:30:59.232562 4872 generic.go:334] "Generic (PLEG): container finished" podID="13b6c575-0d6a-4cf8-867d-3230bdded4e4" containerID="91b2b52f8514357289c8ede108c4c2c8fdec5f884bdfaf7d78d9e957ff55373a" exitCode=0
Feb 03 06:30:59 crc kubenswrapper[4872]: I0203 06:30:59.232788 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw" event={"ID":"13b6c575-0d6a-4cf8-867d-3230bdded4e4","Type":"ContainerDied","Data":"91b2b52f8514357289c8ede108c4c2c8fdec5f884bdfaf7d78d9e957ff55373a"}
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.663168 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.793050 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftqd6\" (UniqueName: \"kubernetes.io/projected/13b6c575-0d6a-4cf8-867d-3230bdded4e4-kube-api-access-ftqd6\") pod \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") "
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.793303 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-ssh-key-openstack-edpm-ipam\") pod \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") "
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.793562 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-inventory\") pod \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\" (UID: \"13b6c575-0d6a-4cf8-867d-3230bdded4e4\") "
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.806881 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b6c575-0d6a-4cf8-867d-3230bdded4e4-kube-api-access-ftqd6" (OuterVolumeSpecName: "kube-api-access-ftqd6") pod "13b6c575-0d6a-4cf8-867d-3230bdded4e4" (UID: "13b6c575-0d6a-4cf8-867d-3230bdded4e4"). InnerVolumeSpecName "kube-api-access-ftqd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.823103 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-inventory" (OuterVolumeSpecName: "inventory") pod "13b6c575-0d6a-4cf8-867d-3230bdded4e4" (UID: "13b6c575-0d6a-4cf8-867d-3230bdded4e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.828363 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "13b6c575-0d6a-4cf8-867d-3230bdded4e4" (UID: "13b6c575-0d6a-4cf8-867d-3230bdded4e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.896648 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.896717 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftqd6\" (UniqueName: \"kubernetes.io/projected/13b6c575-0d6a-4cf8-867d-3230bdded4e4-kube-api-access-ftqd6\") on node \"crc\" DevicePath \"\""
Feb 03 06:31:00 crc kubenswrapper[4872]: I0203 06:31:00.896739 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13b6c575-0d6a-4cf8-867d-3230bdded4e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.263631 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw" event={"ID":"13b6c575-0d6a-4cf8-867d-3230bdded4e4","Type":"ContainerDied","Data":"2213c6c8fd0fdedc023658da146e3fa757b7b703b65c02509aac407a3a48541c"}
Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.264023 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2213c6c8fd0fdedc023658da146e3fa757b7b703b65c02509aac407a3a48541c"
Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.263872 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fjglw"
Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.342378 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv"]
Feb 03 06:31:01 crc kubenswrapper[4872]: E0203 06:31:01.342830 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b6c575-0d6a-4cf8-867d-3230bdded4e4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.342850 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b6c575-0d6a-4cf8-867d-3230bdded4e4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.343023 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b6c575-0d6a-4cf8-867d-3230bdded4e4" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.343593 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.345402 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.345429 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.345634 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.345744 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.360963 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv"] Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.410306 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.410370 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.410429 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpqn5\" (UniqueName: \"kubernetes.io/projected/c09861b6-6f3c-496c-a46e-eb7667965fc7-kube-api-access-cpqn5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.511998 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.512055 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.512094 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpqn5\" (UniqueName: \"kubernetes.io/projected/c09861b6-6f3c-496c-a46e-eb7667965fc7-kube-api-access-cpqn5\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.516232 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.517681 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.532901 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpqn5\" (UniqueName: \"kubernetes.io/projected/c09861b6-6f3c-496c-a46e-eb7667965fc7-kube-api-access-cpqn5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qckpv\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:01 crc kubenswrapper[4872]: I0203 06:31:01.663893 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:02 crc kubenswrapper[4872]: I0203 06:31:02.266796 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv"] Feb 03 06:31:02 crc kubenswrapper[4872]: I0203 06:31:02.271844 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" event={"ID":"c09861b6-6f3c-496c-a46e-eb7667965fc7","Type":"ContainerStarted","Data":"81fa3269cc022ab481ce6dd99c0ce19b9fae534ef19a2b0e735aa4fba15660bf"} Feb 03 06:31:03 crc kubenswrapper[4872]: I0203 06:31:03.280839 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" event={"ID":"c09861b6-6f3c-496c-a46e-eb7667965fc7","Type":"ContainerStarted","Data":"d0a08947611d3fb085add1461c4c0cc68efa8a986d7d9304f857b91911177b64"} Feb 03 06:31:13 crc kubenswrapper[4872]: I0203 06:31:13.062729 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" podStartSLOduration=11.783207926 podStartE2EDuration="12.062681971s" podCreationTimestamp="2026-02-03 06:31:01 +0000 UTC" firstStartedPulling="2026-02-03 06:31:02.268803793 +0000 UTC m=+1832.851495207" lastFinishedPulling="2026-02-03 06:31:02.548277798 +0000 UTC m=+1833.130969252" observedRunningTime="2026-02-03 06:31:03.313509993 +0000 UTC m=+1833.896201407" watchObservedRunningTime="2026-02-03 06:31:13.062681971 +0000 UTC m=+1843.645373405" Feb 03 06:31:13 crc kubenswrapper[4872]: I0203 06:31:13.076110 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4k9hs"] Feb 03 06:31:13 crc kubenswrapper[4872]: I0203 06:31:13.088558 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4k9hs"] Feb 03 
06:31:14 crc kubenswrapper[4872]: I0203 06:31:14.135660 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b823238a-397a-4aba-9788-c9bbe361f24e" path="/var/lib/kubelet/pods/b823238a-397a-4aba-9788-c9bbe361f24e/volumes" Feb 03 06:31:15 crc kubenswrapper[4872]: I0203 06:31:15.043045 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxmjw"] Feb 03 06:31:15 crc kubenswrapper[4872]: I0203 06:31:15.058467 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pxmjw"] Feb 03 06:31:16 crc kubenswrapper[4872]: I0203 06:31:16.140803 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30dc4098-27fd-4b55-a9bc-a66d92186ea6" path="/var/lib/kubelet/pods/30dc4098-27fd-4b55-a9bc-a66d92186ea6/volumes" Feb 03 06:31:42 crc kubenswrapper[4872]: I0203 06:31:42.652334 4872 generic.go:334] "Generic (PLEG): container finished" podID="c09861b6-6f3c-496c-a46e-eb7667965fc7" containerID="d0a08947611d3fb085add1461c4c0cc68efa8a986d7d9304f857b91911177b64" exitCode=0 Feb 03 06:31:42 crc kubenswrapper[4872]: I0203 06:31:42.652444 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" event={"ID":"c09861b6-6f3c-496c-a46e-eb7667965fc7","Type":"ContainerDied","Data":"d0a08947611d3fb085add1461c4c0cc68efa8a986d7d9304f857b91911177b64"} Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.155585 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.198717 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-inventory\") pod \"c09861b6-6f3c-496c-a46e-eb7667965fc7\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.198989 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpqn5\" (UniqueName: \"kubernetes.io/projected/c09861b6-6f3c-496c-a46e-eb7667965fc7-kube-api-access-cpqn5\") pod \"c09861b6-6f3c-496c-a46e-eb7667965fc7\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.199052 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-ssh-key-openstack-edpm-ipam\") pod \"c09861b6-6f3c-496c-a46e-eb7667965fc7\" (UID: \"c09861b6-6f3c-496c-a46e-eb7667965fc7\") " Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.211408 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09861b6-6f3c-496c-a46e-eb7667965fc7-kube-api-access-cpqn5" (OuterVolumeSpecName: "kube-api-access-cpqn5") pod "c09861b6-6f3c-496c-a46e-eb7667965fc7" (UID: "c09861b6-6f3c-496c-a46e-eb7667965fc7"). InnerVolumeSpecName "kube-api-access-cpqn5". 
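A few records above, pod_startup_latency_tracker reports the install-os job's startup numbers. They are internally consistent: podStartSLOduration appears to be the end-to-end startup duration minus the image-pull window, and the monotonic (m=) offsets quoted in the same record bear this out. A worked check using only values from that record:

\[
\begin{aligned}
t_{\mathrm{pull}} &= 1833.130969252 - 1832.851495207 = 0.279474045\,\mathrm{s} \\
t_{\mathrm{E2E}} &= 06{:}31{:}13.062681971 - 06{:}31{:}01 = 12.062681971\,\mathrm{s} \\
t_{\mathrm{SLO}} &= t_{\mathrm{E2E}} - t_{\mathrm{pull}} = 12.062681971 - 0.279474045 = 11.783207926\,\mathrm{s}
\end{aligned}
\]

which matches podStartSLOduration=11.783207926 exactly. The same decomposition checks out for the later configure-os, ssh-known-hosts, and run-os latency records in this capture.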
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.227795 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c09861b6-6f3c-496c-a46e-eb7667965fc7" (UID: "c09861b6-6f3c-496c-a46e-eb7667965fc7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.239025 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-inventory" (OuterVolumeSpecName: "inventory") pod "c09861b6-6f3c-496c-a46e-eb7667965fc7" (UID: "c09861b6-6f3c-496c-a46e-eb7667965fc7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.304548 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.304585 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpqn5\" (UniqueName: \"kubernetes.io/projected/c09861b6-6f3c-496c-a46e-eb7667965fc7-kube-api-access-cpqn5\") on node \"crc\" DevicePath \"\"" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.304595 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c09861b6-6f3c-496c-a46e-eb7667965fc7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.674311 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" event={"ID":"c09861b6-6f3c-496c-a46e-eb7667965fc7","Type":"ContainerDied","Data":"81fa3269cc022ab481ce6dd99c0ce19b9fae534ef19a2b0e735aa4fba15660bf"} Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.674362 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81fa3269cc022ab481ce6dd99c0ce19b9fae534ef19a2b0e735aa4fba15660bf" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.674431 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qckpv" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.784224 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld"] Feb 03 06:31:44 crc kubenswrapper[4872]: E0203 06:31:44.784585 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09861b6-6f3c-496c-a46e-eb7667965fc7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.784606 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09861b6-6f3c-496c-a46e-eb7667965fc7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.784865 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09861b6-6f3c-496c-a46e-eb7667965fc7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.785540 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.787619 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.787987 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.788309 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.788756 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.809645 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld"] Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.915192 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.915353 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857x9\" (UniqueName: \"kubernetes.io/projected/34dc0856-28c2-4b86-adb9-0310701b5110-kube-api-access-857x9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:44 crc kubenswrapper[4872]: I0203 06:31:44.915392 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.017285 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-857x9\" (UniqueName: \"kubernetes.io/projected/34dc0856-28c2-4b86-adb9-0310701b5110-kube-api-access-857x9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.017329 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.017428 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.020940 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.021195 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.035287 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-857x9\" (UniqueName: \"kubernetes.io/projected/34dc0856-28c2-4b86-adb9-0310701b5110-kube-api-access-857x9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.108722 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.680468 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld"] Feb 03 06:31:45 crc kubenswrapper[4872]: I0203 06:31:45.698072 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:31:46 crc kubenswrapper[4872]: I0203 06:31:46.692814 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" event={"ID":"34dc0856-28c2-4b86-adb9-0310701b5110","Type":"ContainerStarted","Data":"263fd3816b17653b081279ca95acb79cd7b7d18c0b8916df3ee7483dab5fb626"} Feb 03 06:31:46 crc kubenswrapper[4872]: I0203 06:31:46.693193 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" event={"ID":"34dc0856-28c2-4b86-adb9-0310701b5110","Type":"ContainerStarted","Data":"9e280b314d79c70682380ed7bbe40537a3aeacc84f79e5f599e3371063551dff"} Feb 03 06:31:46 crc kubenswrapper[4872]: I0203 06:31:46.725711 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" podStartSLOduration=2.547014442 podStartE2EDuration="2.725663372s" podCreationTimestamp="2026-02-03 06:31:44 +0000 UTC" firstStartedPulling="2026-02-03 06:31:45.697830097 +0000 UTC m=+1876.280521511" lastFinishedPulling="2026-02-03 06:31:45.876479027 +0000 UTC m=+1876.459170441" observedRunningTime="2026-02-03 06:31:46.718741475 +0000 UTC m=+1877.301432919" watchObservedRunningTime="2026-02-03 06:31:46.725663372 +0000 UTC m=+1877.308354816" Feb 03 06:31:57 crc kubenswrapper[4872]: I0203 
06:31:57.618904 4872 scope.go:117] "RemoveContainer" containerID="3bbdf759550d819d5edf05e39c5676c21a3da212c21424f95d80c4f66e7b660c" Feb 03 06:31:57 crc kubenswrapper[4872]: I0203 06:31:57.670176 4872 scope.go:117] "RemoveContainer" containerID="d1af0c5c20e54230f6f2b76d0f914201026f4355122f78801db2becba6dd7275" Feb 03 06:31:58 crc kubenswrapper[4872]: I0203 06:31:58.052930 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxhcz"] Feb 03 06:31:58 crc kubenswrapper[4872]: I0203 06:31:58.060418 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vxhcz"] Feb 03 06:31:58 crc kubenswrapper[4872]: I0203 06:31:58.132373 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d4d4bd-0dd6-422f-87fd-d379c0110294" path="/var/lib/kubelet/pods/b4d4d4bd-0dd6-422f-87fd-d379c0110294/volumes" Feb 03 06:32:31 crc kubenswrapper[4872]: I0203 06:32:31.270983 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:32:31 crc kubenswrapper[4872]: I0203 06:32:31.271741 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:32:39 crc kubenswrapper[4872]: I0203 06:32:39.162672 4872 generic.go:334] "Generic (PLEG): container finished" podID="34dc0856-28c2-4b86-adb9-0310701b5110" containerID="263fd3816b17653b081279ca95acb79cd7b7d18c0b8916df3ee7483dab5fb626" exitCode=0 Feb 03 06:32:39 crc kubenswrapper[4872]: I0203 06:32:39.163653 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" event={"ID":"34dc0856-28c2-4b86-adb9-0310701b5110","Type":"ContainerDied","Data":"263fd3816b17653b081279ca95acb79cd7b7d18c0b8916df3ee7483dab5fb626"} Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.641937 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.771224 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-857x9\" (UniqueName: \"kubernetes.io/projected/34dc0856-28c2-4b86-adb9-0310701b5110-kube-api-access-857x9\") pod \"34dc0856-28c2-4b86-adb9-0310701b5110\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.771314 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-ssh-key-openstack-edpm-ipam\") pod \"34dc0856-28c2-4b86-adb9-0310701b5110\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.771403 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-inventory\") pod \"34dc0856-28c2-4b86-adb9-0310701b5110\" (UID: \"34dc0856-28c2-4b86-adb9-0310701b5110\") " Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.776097 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34dc0856-28c2-4b86-adb9-0310701b5110-kube-api-access-857x9" (OuterVolumeSpecName: "kube-api-access-857x9") pod "34dc0856-28c2-4b86-adb9-0310701b5110" (UID: "34dc0856-28c2-4b86-adb9-0310701b5110"). InnerVolumeSpecName "kube-api-access-857x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.800974 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34dc0856-28c2-4b86-adb9-0310701b5110" (UID: "34dc0856-28c2-4b86-adb9-0310701b5110"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.821593 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-inventory" (OuterVolumeSpecName: "inventory") pod "34dc0856-28c2-4b86-adb9-0310701b5110" (UID: "34dc0856-28c2-4b86-adb9-0310701b5110"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.873850 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-857x9\" (UniqueName: \"kubernetes.io/projected/34dc0856-28c2-4b86-adb9-0310701b5110-kube-api-access-857x9\") on node \"crc\" DevicePath \"\"" Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.873875 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:32:40 crc kubenswrapper[4872]: I0203 06:32:40.873885 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34dc0856-28c2-4b86-adb9-0310701b5110-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.181281 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" event={"ID":"34dc0856-28c2-4b86-adb9-0310701b5110","Type":"ContainerDied","Data":"9e280b314d79c70682380ed7bbe40537a3aeacc84f79e5f599e3371063551dff"} Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.181324 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e280b314d79c70682380ed7bbe40537a3aeacc84f79e5f599e3371063551dff" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.181376 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.330067 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fvbc6"] Feb 03 06:32:41 crc kubenswrapper[4872]: E0203 06:32:41.331002 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34dc0856-28c2-4b86-adb9-0310701b5110" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.331028 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="34dc0856-28c2-4b86-adb9-0310701b5110" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.331266 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="34dc0856-28c2-4b86-adb9-0310701b5110" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.332045 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.337173 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.337423 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.338975 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.347288 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.359482 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fvbc6"] Feb 03 06:32:41 crc kubenswrapper[4872]: E0203 06:32:41.384199 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34dc0856_28c2_4b86_adb9_0310701b5110.slice\": RecentStats: unable to find data in memory cache]" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.485871 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9t7w\" (UniqueName: \"kubernetes.io/projected/60b9b2c5-7f79-49d1-b215-4de0664b44c0-kube-api-access-x9t7w\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.485965 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.486117 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.587480 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.587663 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9t7w\" (UniqueName: \"kubernetes.io/projected/60b9b2c5-7f79-49d1-b215-4de0664b44c0-kube-api-access-x9t7w\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.587757 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.592298 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.596630 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.619614 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9t7w\" (UniqueName: \"kubernetes.io/projected/60b9b2c5-7f79-49d1-b215-4de0664b44c0-kube-api-access-x9t7w\") pod \"ssh-known-hosts-edpm-deployment-fvbc6\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:41 crc kubenswrapper[4872]: I0203 06:32:41.651993 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:42 crc kubenswrapper[4872]: I0203 06:32:42.229603 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fvbc6"] Feb 03 06:32:43 crc kubenswrapper[4872]: I0203 06:32:43.198500 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" event={"ID":"60b9b2c5-7f79-49d1-b215-4de0664b44c0","Type":"ContainerStarted","Data":"87cf216471dede83b56e71395c6935bbd4526954de74dcb4d1f0cac39583c430"} Feb 03 06:32:43 crc kubenswrapper[4872]: I0203 06:32:43.198879 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" event={"ID":"60b9b2c5-7f79-49d1-b215-4de0664b44c0","Type":"ContainerStarted","Data":"3d8e62b3653054d48cf04840b6a488a340357d09c32aaf25546db550933a6618"} Feb 03 06:32:43 crc kubenswrapper[4872]: I0203 06:32:43.222558 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" podStartSLOduration=1.9413079020000001 podStartE2EDuration="2.222533699s" podCreationTimestamp="2026-02-03 06:32:41 +0000 UTC" firstStartedPulling="2026-02-03 06:32:42.236144783 +0000 UTC m=+1932.818836207" lastFinishedPulling="2026-02-03 06:32:42.51737059 +0000 UTC m=+1933.100062004" observedRunningTime="2026-02-03 06:32:43.21386067 +0000 UTC m=+1933.796552094" watchObservedRunningTime="2026-02-03 06:32:43.222533699 +0000 UTC m=+1933.805225123" Feb 03 06:32:50 crc kubenswrapper[4872]: I0203 06:32:50.283990 4872 generic.go:334] "Generic (PLEG): container finished" podID="60b9b2c5-7f79-49d1-b215-4de0664b44c0" containerID="87cf216471dede83b56e71395c6935bbd4526954de74dcb4d1f0cac39583c430" exitCode=0 Feb 03 06:32:50 crc kubenswrapper[4872]: I0203 06:32:50.284068 4872 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" event={"ID":"60b9b2c5-7f79-49d1-b215-4de0664b44c0","Type":"ContainerDied","Data":"87cf216471dede83b56e71395c6935bbd4526954de74dcb4d1f0cac39583c430"} Feb 03 06:32:51 crc kubenswrapper[4872]: I0203 06:32:51.948776 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.091120 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-ssh-key-openstack-edpm-ipam\") pod \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.091187 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9t7w\" (UniqueName: \"kubernetes.io/projected/60b9b2c5-7f79-49d1-b215-4de0664b44c0-kube-api-access-x9t7w\") pod \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.091277 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-inventory-0\") pod \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\" (UID: \"60b9b2c5-7f79-49d1-b215-4de0664b44c0\") " Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.108162 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b9b2c5-7f79-49d1-b215-4de0664b44c0-kube-api-access-x9t7w" (OuterVolumeSpecName: "kube-api-access-x9t7w") pod "60b9b2c5-7f79-49d1-b215-4de0664b44c0" (UID: "60b9b2c5-7f79-49d1-b215-4de0664b44c0"). InnerVolumeSpecName "kube-api-access-x9t7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.119764 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60b9b2c5-7f79-49d1-b215-4de0664b44c0" (UID: "60b9b2c5-7f79-49d1-b215-4de0664b44c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.137249 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "60b9b2c5-7f79-49d1-b215-4de0664b44c0" (UID: "60b9b2c5-7f79-49d1-b215-4de0664b44c0"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.193799 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.193917 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9t7w\" (UniqueName: \"kubernetes.io/projected/60b9b2c5-7f79-49d1-b215-4de0664b44c0-kube-api-access-x9t7w\") on node \"crc\" DevicePath \"\"" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.193930 4872 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/60b9b2c5-7f79-49d1-b215-4de0664b44c0-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.305633 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" event={"ID":"60b9b2c5-7f79-49d1-b215-4de0664b44c0","Type":"ContainerDied","Data":"3d8e62b3653054d48cf04840b6a488a340357d09c32aaf25546db550933a6618"} Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.305677 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8e62b3653054d48cf04840b6a488a340357d09c32aaf25546db550933a6618" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.305755 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fvbc6" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.415846 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7"] Feb 03 06:32:52 crc kubenswrapper[4872]: E0203 06:32:52.429251 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b9b2c5-7f79-49d1-b215-4de0664b44c0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.429474 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b9b2c5-7f79-49d1-b215-4de0664b44c0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.429807 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b9b2c5-7f79-49d1-b215-4de0664b44c0" containerName="ssh-known-hosts-edpm-deployment" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.430401 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7"] Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.430545 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.434252 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.434567 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.435398 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.437561 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.500129 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27vgn\" (UniqueName: \"kubernetes.io/projected/b78a618f-d5ee-4722-8d29-b142f05127bf-kube-api-access-27vgn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.500208 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.500251 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.601770 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27vgn\" (UniqueName: \"kubernetes.io/projected/b78a618f-d5ee-4722-8d29-b142f05127bf-kube-api-access-27vgn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.601848 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.601884 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.608645 
4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.613874 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.631981 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27vgn\" (UniqueName: \"kubernetes.io/projected/b78a618f-d5ee-4722-8d29-b142f05127bf-kube-api-access-27vgn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-nj6c7\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:52 crc kubenswrapper[4872]: I0203 06:32:52.764038 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:32:53 crc kubenswrapper[4872]: I0203 06:32:53.394518 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7"] Feb 03 06:32:53 crc kubenswrapper[4872]: W0203 06:32:53.437292 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb78a618f_d5ee_4722_8d29_b142f05127bf.slice/crio-2852cc52202ddc6ca32b02f3b098ed5257bcb349cfa9c00b5b2716382b3bdfcf WatchSource:0}: Error finding container 2852cc52202ddc6ca32b02f3b098ed5257bcb349cfa9c00b5b2716382b3bdfcf: Status 404 returned error can't find the container with id 2852cc52202ddc6ca32b02f3b098ed5257bcb349cfa9c00b5b2716382b3bdfcf Feb 03 06:32:54 crc kubenswrapper[4872]: I0203 06:32:54.322922 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" event={"ID":"b78a618f-d5ee-4722-8d29-b142f05127bf","Type":"ContainerStarted","Data":"03aca36d30d4cde84a9e686c6c324dc405e1d246c2376eecc3a4850eb7e37208"} Feb 03 06:32:54 crc kubenswrapper[4872]: I0203 06:32:54.323279 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" event={"ID":"b78a618f-d5ee-4722-8d29-b142f05127bf","Type":"ContainerStarted","Data":"2852cc52202ddc6ca32b02f3b098ed5257bcb349cfa9c00b5b2716382b3bdfcf"} Feb 03 06:32:54 crc kubenswrapper[4872]: I0203 06:32:54.341634 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" podStartSLOduration=2.073706497 podStartE2EDuration="2.341609105s" podCreationTimestamp="2026-02-03 06:32:52 +0000 UTC" firstStartedPulling="2026-02-03 06:32:53.439679309 +0000 UTC m=+1944.022370723" lastFinishedPulling="2026-02-03 06:32:53.707581917 +0000 UTC m=+1944.290273331" observedRunningTime="2026-02-03 06:32:54.336116851 +0000 UTC m=+1944.918808265" watchObservedRunningTime="2026-02-03 06:32:54.341609105 +0000 UTC m=+1944.924300519" Feb 03 06:32:57 crc kubenswrapper[4872]: I0203 06:32:57.777166 4872 scope.go:117] 
"RemoveContainer" containerID="d18f02849e401a11832221bbee2d047f86766eaa5c14f6b9aa71eb1fbc608296" Feb 03 06:33:01 crc kubenswrapper[4872]: I0203 06:33:01.271072 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:33:01 crc kubenswrapper[4872]: I0203 06:33:01.271736 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:33:02 crc kubenswrapper[4872]: I0203 06:33:02.420344 4872 generic.go:334] "Generic (PLEG): container finished" podID="b78a618f-d5ee-4722-8d29-b142f05127bf" containerID="03aca36d30d4cde84a9e686c6c324dc405e1d246c2376eecc3a4850eb7e37208" exitCode=0 Feb 03 06:33:02 crc kubenswrapper[4872]: I0203 06:33:02.420448 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" event={"ID":"b78a618f-d5ee-4722-8d29-b142f05127bf","Type":"ContainerDied","Data":"03aca36d30d4cde84a9e686c6c324dc405e1d246c2376eecc3a4850eb7e37208"} Feb 03 06:33:03 crc kubenswrapper[4872]: I0203 06:33:03.843391 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:33:03 crc kubenswrapper[4872]: I0203 06:33:03.926190 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-ssh-key-openstack-edpm-ipam\") pod \"b78a618f-d5ee-4722-8d29-b142f05127bf\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " Feb 03 06:33:03 crc kubenswrapper[4872]: I0203 06:33:03.926405 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-inventory\") pod \"b78a618f-d5ee-4722-8d29-b142f05127bf\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " Feb 03 06:33:03 crc kubenswrapper[4872]: I0203 06:33:03.926490 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27vgn\" (UniqueName: \"kubernetes.io/projected/b78a618f-d5ee-4722-8d29-b142f05127bf-kube-api-access-27vgn\") pod \"b78a618f-d5ee-4722-8d29-b142f05127bf\" (UID: \"b78a618f-d5ee-4722-8d29-b142f05127bf\") " Feb 03 06:33:03 crc kubenswrapper[4872]: I0203 06:33:03.932827 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78a618f-d5ee-4722-8d29-b142f05127bf-kube-api-access-27vgn" (OuterVolumeSpecName: "kube-api-access-27vgn") pod "b78a618f-d5ee-4722-8d29-b142f05127bf" (UID: "b78a618f-d5ee-4722-8d29-b142f05127bf"). InnerVolumeSpecName "kube-api-access-27vgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:33:03 crc kubenswrapper[4872]: I0203 06:33:03.957936 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-inventory" (OuterVolumeSpecName: "inventory") pod "b78a618f-d5ee-4722-8d29-b142f05127bf" (UID: "b78a618f-d5ee-4722-8d29-b142f05127bf"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:33:03 crc kubenswrapper[4872]: I0203 06:33:03.970942 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b78a618f-d5ee-4722-8d29-b142f05127bf" (UID: "b78a618f-d5ee-4722-8d29-b142f05127bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.028908 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.029259 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27vgn\" (UniqueName: \"kubernetes.io/projected/b78a618f-d5ee-4722-8d29-b142f05127bf-kube-api-access-27vgn\") on node \"crc\" DevicePath \"\"" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.029362 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b78a618f-d5ee-4722-8d29-b142f05127bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.440013 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" event={"ID":"b78a618f-d5ee-4722-8d29-b142f05127bf","Type":"ContainerDied","Data":"2852cc52202ddc6ca32b02f3b098ed5257bcb349cfa9c00b5b2716382b3bdfcf"} Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.440063 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2852cc52202ddc6ca32b02f3b098ed5257bcb349cfa9c00b5b2716382b3bdfcf" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.440552 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-nj6c7" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.520862 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"] Feb 03 06:33:04 crc kubenswrapper[4872]: E0203 06:33:04.521306 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78a618f-d5ee-4722-8d29-b142f05127bf" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.521331 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78a618f-d5ee-4722-8d29-b142f05127bf" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.521564 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78a618f-d5ee-4722-8d29-b142f05127bf" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.522354 4872 util.go:30] "No sandbox for pod can be found. 
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.528846 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.529195 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"]
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.530651 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.530841 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.530961 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.638113 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.638166 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg7bs\" (UniqueName: \"kubernetes.io/projected/e99e0ea4-b0dd-482a-986f-80eed7253030-kube-api-access-pg7bs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.638205 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.740734 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.741486 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg7bs\" (UniqueName: \"kubernetes.io/projected/e99e0ea4-b0dd-482a-986f-80eed7253030-kube-api-access-pg7bs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.741549 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.746562 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.746607 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.763708 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg7bs\" (UniqueName: \"kubernetes.io/projected/e99e0ea4-b0dd-482a-986f-80eed7253030-kube-api-access-pg7bs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:04 crc kubenswrapper[4872]: I0203 06:33:04.838115 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:05 crc kubenswrapper[4872]: I0203 06:33:05.358509 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"]
Feb 03 06:33:05 crc kubenswrapper[4872]: I0203 06:33:05.449315 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65" event={"ID":"e99e0ea4-b0dd-482a-986f-80eed7253030","Type":"ContainerStarted","Data":"42137f4e16d14813a51326e6aff41f532d3b731f0dcc202ab28a48df4f68638b"}
Feb 03 06:33:06 crc kubenswrapper[4872]: I0203 06:33:06.460354 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65" event={"ID":"e99e0ea4-b0dd-482a-986f-80eed7253030","Type":"ContainerStarted","Data":"d933c834f05c1fd237d508314cbc8a2371a65afa7d6f5b967d5983019f2c3e2b"}
Feb 03 06:33:06 crc kubenswrapper[4872]: I0203 06:33:06.484509 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65" podStartSLOduration=2.277179243 podStartE2EDuration="2.484488602s" podCreationTimestamp="2026-02-03 06:33:04 +0000 UTC" firstStartedPulling="2026-02-03 06:33:05.369004099 +0000 UTC m=+1955.951695513" lastFinishedPulling="2026-02-03 06:33:05.576313458 +0000 UTC m=+1956.159004872" observedRunningTime="2026-02-03 06:33:06.481376366 +0000 UTC m=+1957.064067820" watchObservedRunningTime="2026-02-03 06:33:06.484488602 +0000 UTC m=+1957.067180016"
Feb 03 06:33:15 crc kubenswrapper[4872]: I0203 06:33:15.534386 4872 generic.go:334] "Generic (PLEG): container finished" podID="e99e0ea4-b0dd-482a-986f-80eed7253030" containerID="d933c834f05c1fd237d508314cbc8a2371a65afa7d6f5b967d5983019f2c3e2b" exitCode=0
Feb 03 06:33:15 crc kubenswrapper[4872]: I0203 06:33:15.534569 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65" event={"ID":"e99e0ea4-b0dd-482a-986f-80eed7253030","Type":"ContainerDied","Data":"d933c834f05c1fd237d508314cbc8a2371a65afa7d6f5b967d5983019f2c3e2b"}
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.096313 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.226817 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-ssh-key-openstack-edpm-ipam\") pod \"e99e0ea4-b0dd-482a-986f-80eed7253030\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") "
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.226980 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg7bs\" (UniqueName: \"kubernetes.io/projected/e99e0ea4-b0dd-482a-986f-80eed7253030-kube-api-access-pg7bs\") pod \"e99e0ea4-b0dd-482a-986f-80eed7253030\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") "
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.227044 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-inventory\") pod \"e99e0ea4-b0dd-482a-986f-80eed7253030\" (UID: \"e99e0ea4-b0dd-482a-986f-80eed7253030\") "
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.236130 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99e0ea4-b0dd-482a-986f-80eed7253030-kube-api-access-pg7bs" (OuterVolumeSpecName: "kube-api-access-pg7bs") pod "e99e0ea4-b0dd-482a-986f-80eed7253030" (UID: "e99e0ea4-b0dd-482a-986f-80eed7253030"). InnerVolumeSpecName "kube-api-access-pg7bs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.263193 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-inventory" (OuterVolumeSpecName: "inventory") pod "e99e0ea4-b0dd-482a-986f-80eed7253030" (UID: "e99e0ea4-b0dd-482a-986f-80eed7253030"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.271124 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e99e0ea4-b0dd-482a-986f-80eed7253030" (UID: "e99e0ea4-b0dd-482a-986f-80eed7253030"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.329656 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.329764 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg7bs\" (UniqueName: \"kubernetes.io/projected/e99e0ea4-b0dd-482a-986f-80eed7253030-kube-api-access-pg7bs\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.329779 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e99e0ea4-b0dd-482a-986f-80eed7253030-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.559258 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65" event={"ID":"e99e0ea4-b0dd-482a-986f-80eed7253030","Type":"ContainerDied","Data":"42137f4e16d14813a51326e6aff41f532d3b731f0dcc202ab28a48df4f68638b"}
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.559634 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42137f4e16d14813a51326e6aff41f532d3b731f0dcc202ab28a48df4f68638b"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.559376 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.718187 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"]
Feb 03 06:33:17 crc kubenswrapper[4872]: E0203 06:33:17.718854 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99e0ea4-b0dd-482a-986f-80eed7253030" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.718954 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99e0ea4-b0dd-482a-986f-80eed7253030" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.719254 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99e0ea4-b0dd-482a-986f-80eed7253030" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.720424 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.734517 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.734924 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.738010 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.738570 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.739452 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.743480 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.746169 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.751823 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.778076 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"]
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.876642 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.876733 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.876858 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.876930 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.876966 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877016 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877035 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877065 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877093 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2k6\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-kube-api-access-5r2k6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877110 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877129 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877207 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877480 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.877534 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979460 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979511 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979535 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979560 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979582 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979605 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979625 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979643 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979666 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2k6\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-kube-api-access-5r2k6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979701 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979724 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979761 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979806 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.979826 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.986633 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.987502 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.987592 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.988925 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.989018 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.995371 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.995672 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.995938 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.996829 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.996940 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.997355 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.997384 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:17 crc kubenswrapper[4872]: I0203 06:33:17.998094 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:18 crc kubenswrapper[4872]: I0203 06:33:18.000504 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2k6\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-kube-api-access-5r2k6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:18 crc kubenswrapper[4872]: I0203 06:33:18.066602 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:18 crc kubenswrapper[4872]: I0203 06:33:18.589644 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"]
Feb 03 06:33:18 crc kubenswrapper[4872]: W0203 06:33:18.597161 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab6004c7_7d34_42ff_bf95_1358f1abcbf1.slice/crio-fe4683488a33377af4d35d7c2d8c7eeb2d77e0b5b85f29f913babd963c61d9e7 WatchSource:0}: Error finding container fe4683488a33377af4d35d7c2d8c7eeb2d77e0b5b85f29f913babd963c61d9e7: Status 404 returned error can't find the container with id fe4683488a33377af4d35d7c2d8c7eeb2d77e0b5b85f29f913babd963c61d9e7
Feb 03 06:33:19 crc kubenswrapper[4872]: I0203 06:33:19.576389 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh" event={"ID":"ab6004c7-7d34-42ff-bf95-1358f1abcbf1","Type":"ContainerStarted","Data":"0aaf0b9981917e4d47c956cf1a1d327cfc83eadf0c29c5d067955f3c4d26357e"}
Feb 03 06:33:19 crc kubenswrapper[4872]: I0203 06:33:19.577056 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh" event={"ID":"ab6004c7-7d34-42ff-bf95-1358f1abcbf1","Type":"ContainerStarted","Data":"fe4683488a33377af4d35d7c2d8c7eeb2d77e0b5b85f29f913babd963c61d9e7"}
Feb 03 06:33:19 crc kubenswrapper[4872]: I0203 06:33:19.602785 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh" podStartSLOduration=2.433288964 podStartE2EDuration="2.602760768s" podCreationTimestamp="2026-02-03 06:33:17 +0000 UTC" firstStartedPulling="2026-02-03 06:33:18.598989507 +0000 UTC m=+1969.181680921" lastFinishedPulling="2026-02-03 06:33:18.768461311 +0000 UTC m=+1969.351152725" observedRunningTime="2026-02-03 06:33:19.59423813 +0000 UTC m=+1970.176929584" watchObservedRunningTime="2026-02-03 06:33:19.602760768 +0000 UTC m=+1970.185452192"
Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.271064 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.271832 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.271903 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.273039 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b81097029afc1ad6b47c80497badb62e78be48a3a9a40312044d79d9f688e3ba"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.273124 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://b81097029afc1ad6b47c80497badb62e78be48a3a9a40312044d79d9f688e3ba" gracePeriod=600 Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.692038 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="b81097029afc1ad6b47c80497badb62e78be48a3a9a40312044d79d9f688e3ba" exitCode=0 Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.692109 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"b81097029afc1ad6b47c80497badb62e78be48a3a9a40312044d79d9f688e3ba"} Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.692578 4872 scope.go:117] "RemoveContainer" containerID="6c98f459d76374c8607d2142c75457540600be83220190c73ce05e2bbada4714" Feb 03 06:33:31 crc kubenswrapper[4872]: I0203 06:33:31.692456 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988"} Feb 03 06:33:56 crc kubenswrapper[4872]: I0203 06:33:56.923231 4872 generic.go:334] "Generic (PLEG): container finished" podID="ab6004c7-7d34-42ff-bf95-1358f1abcbf1" containerID="0aaf0b9981917e4d47c956cf1a1d327cfc83eadf0c29c5d067955f3c4d26357e" exitCode=0 Feb 03 06:33:56 crc kubenswrapper[4872]: I0203 06:33:56.923314 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh" event={"ID":"ab6004c7-7d34-42ff-bf95-1358f1abcbf1","Type":"ContainerDied","Data":"0aaf0b9981917e4d47c956cf1a1d327cfc83eadf0c29c5d067955f3c4d26357e"} Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.315831 4872 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.452590 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-repo-setup-combined-ca-bundle\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.452651 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-bootstrap-combined-ca-bundle\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.452735 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.452764 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.452799 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-nova-combined-ca-bundle\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453635 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ovn-combined-ca-bundle\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453667 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-libvirt-combined-ca-bundle\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453727 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ssh-key-openstack-edpm-ipam\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453746 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-inventory\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453768 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r2k6\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-kube-api-access-5r2k6\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453819 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453863 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-neutron-metadata-combined-ca-bundle\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453931 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.453965 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-telemetry-combined-ca-bundle\") pod \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\" (UID: \"ab6004c7-7d34-42ff-bf95-1358f1abcbf1\") "
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.459497 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.460012 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.461931 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.462463 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.462616 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.463794 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.464184 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-kube-api-access-5r2k6" (OuterVolumeSpecName: "kube-api-access-5r2k6") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "kube-api-access-5r2k6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.466140 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.466235 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.466722 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.467889 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.483974 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.497430 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-inventory" (OuterVolumeSpecName: "inventory") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.506787 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab6004c7-7d34-42ff-bf95-1358f1abcbf1" (UID: "ab6004c7-7d34-42ff-bf95-1358f1abcbf1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556162 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556204 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556219 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r2k6\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-kube-api-access-5r2k6\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556232 4872 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556247 4872 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556262 4872 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556276 4872 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556289 4872 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556301 4872 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556313 4872 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556327 4872 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556341 4872 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556354 4872 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.556365 4872 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab6004c7-7d34-42ff-bf95-1358f1abcbf1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.963390 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh" event={"ID":"ab6004c7-7d34-42ff-bf95-1358f1abcbf1","Type":"ContainerDied","Data":"fe4683488a33377af4d35d7c2d8c7eeb2d77e0b5b85f29f913babd963c61d9e7"}
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.964180 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4683488a33377af4d35d7c2d8c7eeb2d77e0b5b85f29f913babd963c61d9e7"
Feb 03 06:33:58 crc kubenswrapper[4872]: I0203 06:33:58.963492 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh"
Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.118603 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm"]
Feb 03 06:33:59 crc kubenswrapper[4872]: E0203 06:33:59.119267 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6004c7-7d34-42ff-bf95-1358f1abcbf1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.119300 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6004c7-7d34-42ff-bf95-1358f1abcbf1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.119616 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6004c7-7d34-42ff-bf95-1358f1abcbf1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.120661 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm"
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.124990 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.125279 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.125475 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.125868 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.130083 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.132816 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm"] Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.269523 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjpmd\" (UniqueName: \"kubernetes.io/projected/604ba5bf-8ae1-4540-9ec8-366de98da8ba-kube-api-access-vjpmd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.269608 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.269648 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.269674 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.269726 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.371525 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-vjpmd\" (UniqueName: \"kubernetes.io/projected/604ba5bf-8ae1-4540-9ec8-366de98da8ba-kube-api-access-vjpmd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.371597 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.371637 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.371657 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.371702 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.372809 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.376602 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.380966 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.381585 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.399435 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjpmd\" (UniqueName: \"kubernetes.io/projected/604ba5bf-8ae1-4540-9ec8-366de98da8ba-kube-api-access-vjpmd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9vpmm\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:33:59 crc kubenswrapper[4872]: I0203 06:33:59.439555 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:34:00 crc kubenswrapper[4872]: I0203 06:34:00.048706 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm"] Feb 03 06:34:00 crc kubenswrapper[4872]: I0203 06:34:00.982989 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" event={"ID":"604ba5bf-8ae1-4540-9ec8-366de98da8ba","Type":"ContainerStarted","Data":"09050ce6d6245cd44f1d496f356da0c17de326638f9281c7bac8e5b4260682f2"} Feb 03 06:34:00 crc kubenswrapper[4872]: I0203 06:34:00.983575 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" event={"ID":"604ba5bf-8ae1-4540-9ec8-366de98da8ba","Type":"ContainerStarted","Data":"d88fad21e3f5ef9f242336e1303ef48a8342450adb87ab90dc98fd7241329cde"} Feb 03 06:35:04 crc kubenswrapper[4872]: I0203 06:35:04.683488 4872 generic.go:334] "Generic (PLEG): container finished" podID="604ba5bf-8ae1-4540-9ec8-366de98da8ba" containerID="09050ce6d6245cd44f1d496f356da0c17de326638f9281c7bac8e5b4260682f2" exitCode=0 Feb 03 06:35:04 crc kubenswrapper[4872]: I0203 06:35:04.683994 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" event={"ID":"604ba5bf-8ae1-4540-9ec8-366de98da8ba","Type":"ContainerDied","Data":"09050ce6d6245cd44f1d496f356da0c17de326638f9281c7bac8e5b4260682f2"} Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.117524 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.316228 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-inventory\") pod \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.316296 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovncontroller-config-0\") pod \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.317311 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjpmd\" (UniqueName: \"kubernetes.io/projected/604ba5bf-8ae1-4540-9ec8-366de98da8ba-kube-api-access-vjpmd\") pod \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.317500 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ssh-key-openstack-edpm-ipam\") pod \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.317585 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovn-combined-ca-bundle\") pod \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\" (UID: \"604ba5bf-8ae1-4540-9ec8-366de98da8ba\") " Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.329953 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604ba5bf-8ae1-4540-9ec8-366de98da8ba-kube-api-access-vjpmd" (OuterVolumeSpecName: "kube-api-access-vjpmd") pod "604ba5bf-8ae1-4540-9ec8-366de98da8ba" (UID: "604ba5bf-8ae1-4540-9ec8-366de98da8ba"). InnerVolumeSpecName "kube-api-access-vjpmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.330097 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "604ba5bf-8ae1-4540-9ec8-366de98da8ba" (UID: "604ba5bf-8ae1-4540-9ec8-366de98da8ba"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.361960 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "604ba5bf-8ae1-4540-9ec8-366de98da8ba" (UID: "604ba5bf-8ae1-4540-9ec8-366de98da8ba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.365985 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-inventory" (OuterVolumeSpecName: "inventory") pod "604ba5bf-8ae1-4540-9ec8-366de98da8ba" (UID: "604ba5bf-8ae1-4540-9ec8-366de98da8ba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.399643 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "604ba5bf-8ae1-4540-9ec8-366de98da8ba" (UID: "604ba5bf-8ae1-4540-9ec8-366de98da8ba"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.420603 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.420641 4872 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.420654 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjpmd\" (UniqueName: \"kubernetes.io/projected/604ba5bf-8ae1-4540-9ec8-366de98da8ba-kube-api-access-vjpmd\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.420668 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.420679 4872 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604ba5bf-8ae1-4540-9ec8-366de98da8ba-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.701367 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" event={"ID":"604ba5bf-8ae1-4540-9ec8-366de98da8ba","Type":"ContainerDied","Data":"d88fad21e3f5ef9f242336e1303ef48a8342450adb87ab90dc98fd7241329cde"} Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.701792 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88fad21e3f5ef9f242336e1303ef48a8342450adb87ab90dc98fd7241329cde" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.701400 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9vpmm" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.823528 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l"] Feb 03 06:35:06 crc kubenswrapper[4872]: E0203 06:35:06.824050 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604ba5bf-8ae1-4540-9ec8-366de98da8ba" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.824076 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="604ba5bf-8ae1-4540-9ec8-366de98da8ba" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.824290 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="604ba5bf-8ae1-4540-9ec8-366de98da8ba" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.827541 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.832877 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.833379 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.833548 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.833753 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.833911 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.835498 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.858612 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l"] Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.929301 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.929718 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65f7n\" (UniqueName: \"kubernetes.io/projected/c7dd671e-752b-42ed-826a-be9e1bbb8d66-kube-api-access-65f7n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.929872 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.930003 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.930224 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:06 crc kubenswrapper[4872]: I0203 06:35:06.930306 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.031562 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.031614 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.031659 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.031711 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65f7n\" (UniqueName: \"kubernetes.io/projected/c7dd671e-752b-42ed-826a-be9e1bbb8d66-kube-api-access-65f7n\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.031745 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.031777 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.035678 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.036020 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.036760 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.037639 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.041480 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.049917 
4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65f7n\" (UniqueName: \"kubernetes.io/projected/c7dd671e-752b-42ed-826a-be9e1bbb8d66-kube-api-access-65f7n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.149189 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:07 crc kubenswrapper[4872]: I0203 06:35:07.703545 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l"] Feb 03 06:35:08 crc kubenswrapper[4872]: I0203 06:35:08.723382 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" event={"ID":"c7dd671e-752b-42ed-826a-be9e1bbb8d66","Type":"ContainerStarted","Data":"8005196b64506664453a422914dff8aaef9e8cb405ecd2631eb6bdb760321632"} Feb 03 06:35:08 crc kubenswrapper[4872]: I0203 06:35:08.724238 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" event={"ID":"c7dd671e-752b-42ed-826a-be9e1bbb8d66","Type":"ContainerStarted","Data":"34655ac512967fa027681d1d48c6187d312113ad528b34783ab0b4e4fa206b01"} Feb 03 06:35:08 crc kubenswrapper[4872]: I0203 06:35:08.758784 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" podStartSLOduration=2.6183766029999997 podStartE2EDuration="2.758640324s" podCreationTimestamp="2026-02-03 06:35:06 +0000 UTC" firstStartedPulling="2026-02-03 06:35:07.716618811 +0000 UTC m=+2078.299310225" lastFinishedPulling="2026-02-03 06:35:07.856882522 +0000 UTC m=+2078.439573946" observedRunningTime="2026-02-03 06:35:08.74757711 +0000 UTC m=+2079.330268514" watchObservedRunningTime="2026-02-03 06:35:08.758640324 +0000 UTC m=+2079.341331758" Feb 03 06:35:31 crc kubenswrapper[4872]: I0203 06:35:31.271498 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:35:31 crc kubenswrapper[4872]: I0203 06:35:31.272074 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:35:58 crc kubenswrapper[4872]: I0203 06:35:58.175912 4872 generic.go:334] "Generic (PLEG): container finished" podID="c7dd671e-752b-42ed-826a-be9e1bbb8d66" containerID="8005196b64506664453a422914dff8aaef9e8cb405ecd2631eb6bdb760321632" exitCode=0 Feb 03 06:35:58 crc kubenswrapper[4872]: I0203 06:35:58.175983 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" event={"ID":"c7dd671e-752b-42ed-826a-be9e1bbb8d66","Type":"ContainerDied","Data":"8005196b64506664453a422914dff8aaef9e8cb405ecd2631eb6bdb760321632"} Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 
06:35:59.712340 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.820924 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-ssh-key-openstack-edpm-ipam\") pod \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.821029 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.821174 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-inventory\") pod \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.822001 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-metadata-combined-ca-bundle\") pod \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.822058 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65f7n\" (UniqueName: \"kubernetes.io/projected/c7dd671e-752b-42ed-826a-be9e1bbb8d66-kube-api-access-65f7n\") pod \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.822115 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-nova-metadata-neutron-config-0\") pod \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\" (UID: \"c7dd671e-752b-42ed-826a-be9e1bbb8d66\") " Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.848181 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7dd671e-752b-42ed-826a-be9e1bbb8d66-kube-api-access-65f7n" (OuterVolumeSpecName: "kube-api-access-65f7n") pod "c7dd671e-752b-42ed-826a-be9e1bbb8d66" (UID: "c7dd671e-752b-42ed-826a-be9e1bbb8d66"). InnerVolumeSpecName "kube-api-access-65f7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.848501 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c7dd671e-752b-42ed-826a-be9e1bbb8d66" (UID: "c7dd671e-752b-42ed-826a-be9e1bbb8d66"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.861608 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-inventory" (OuterVolumeSpecName: "inventory") pod "c7dd671e-752b-42ed-826a-be9e1bbb8d66" (UID: "c7dd671e-752b-42ed-826a-be9e1bbb8d66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.862375 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c7dd671e-752b-42ed-826a-be9e1bbb8d66" (UID: "c7dd671e-752b-42ed-826a-be9e1bbb8d66"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.870799 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c7dd671e-752b-42ed-826a-be9e1bbb8d66" (UID: "c7dd671e-752b-42ed-826a-be9e1bbb8d66"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.877990 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c7dd671e-752b-42ed-826a-be9e1bbb8d66" (UID: "c7dd671e-752b-42ed-826a-be9e1bbb8d66"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.925200 4872 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.925244 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.925260 4872 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.925276 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65f7n\" (UniqueName: \"kubernetes.io/projected/c7dd671e-752b-42ed-826a-be9e1bbb8d66-kube-api-access-65f7n\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.925292 4872 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:35:59 crc kubenswrapper[4872]: I0203 06:35:59.925304 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7dd671e-752b-42ed-826a-be9e1bbb8d66-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.201050 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" event={"ID":"c7dd671e-752b-42ed-826a-be9e1bbb8d66","Type":"ContainerDied","Data":"34655ac512967fa027681d1d48c6187d312113ad528b34783ab0b4e4fa206b01"} Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.201092 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34655ac512967fa027681d1d48c6187d312113ad528b34783ab0b4e4fa206b01" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.201163 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.408863 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs"] Feb 03 06:36:00 crc kubenswrapper[4872]: E0203 06:36:00.409324 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7dd671e-752b-42ed-826a-be9e1bbb8d66" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.409348 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7dd671e-752b-42ed-826a-be9e1bbb8d66" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.409598 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7dd671e-752b-42ed-826a-be9e1bbb8d66" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.410383 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.415147 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.415448 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.415223 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.415289 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.415320 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.419471 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs"] Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.540226 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.540279 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.540436 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.540651 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgzbx\" (UniqueName: \"kubernetes.io/projected/3d7243a2-dd2b-4462-8313-92e68450f743-kube-api-access-mgzbx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.540798 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.642640 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.642729 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.642757 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.642805 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.642842 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgzbx\" (UniqueName: \"kubernetes.io/projected/3d7243a2-dd2b-4462-8313-92e68450f743-kube-api-access-mgzbx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.647600 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: 
\"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.647717 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.659094 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.663022 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.666060 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgzbx\" (UniqueName: \"kubernetes.io/projected/3d7243a2-dd2b-4462-8313-92e68450f743-kube-api-access-mgzbx\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:00 crc kubenswrapper[4872]: I0203 06:36:00.740870 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:36:01 crc kubenswrapper[4872]: I0203 06:36:01.271742 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:36:01 crc kubenswrapper[4872]: I0203 06:36:01.272052 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:36:01 crc kubenswrapper[4872]: I0203 06:36:01.336048 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs"] Feb 03 06:36:02 crc kubenswrapper[4872]: I0203 06:36:02.223075 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" event={"ID":"3d7243a2-dd2b-4462-8313-92e68450f743","Type":"ContainerStarted","Data":"0ecdbaf9b672c75743741b297f52b5b459894f31571d1326fe657443e32901b5"} Feb 03 06:36:02 crc kubenswrapper[4872]: I0203 06:36:02.223395 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" event={"ID":"3d7243a2-dd2b-4462-8313-92e68450f743","Type":"ContainerStarted","Data":"96d0bbd8aed2992f3d3c49d095a2b61430f7acd21116cb218310b624a5108658"} Feb 03 06:36:02 crc kubenswrapper[4872]: I0203 06:36:02.260357 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" podStartSLOduration=2.112592287 podStartE2EDuration="2.260329106s" podCreationTimestamp="2026-02-03 06:36:00 +0000 UTC" firstStartedPulling="2026-02-03 06:36:01.339267032 +0000 UTC m=+2131.921958446" lastFinishedPulling="2026-02-03 06:36:01.487003851 +0000 UTC m=+2132.069695265" observedRunningTime="2026-02-03 06:36:02.248414771 +0000 UTC m=+2132.831106195" watchObservedRunningTime="2026-02-03 06:36:02.260329106 +0000 UTC m=+2132.843020550" Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.271884 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.272376 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.272417 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.273094 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.273138 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" gracePeriod=600 Feb 03 06:36:31 crc kubenswrapper[4872]: E0203 06:36:31.392408 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.469226 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" exitCode=0 Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.469275 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988"} Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.469307 4872 scope.go:117] "RemoveContainer" containerID="b81097029afc1ad6b47c80497badb62e78be48a3a9a40312044d79d9f688e3ba" Feb 03 06:36:31 crc kubenswrapper[4872]: I0203 06:36:31.469890 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:36:31 crc kubenswrapper[4872]: E0203 06:36:31.470107 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.702550 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-chv9d"] Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.705055 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.746509 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42m8\" (UniqueName: \"kubernetes.io/projected/acd31a8d-b5af-408b-bc36-983b0cc56d1e-kube-api-access-n42m8\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.746560 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-utilities\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.746605 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-catalog-content\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.757104 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv9d"] Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.848151 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42m8\" (UniqueName: \"kubernetes.io/projected/acd31a8d-b5af-408b-bc36-983b0cc56d1e-kube-api-access-n42m8\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.848218 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-utilities\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.848275 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-catalog-content\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.848912 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-catalog-content\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.849003 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-utilities\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:33 crc kubenswrapper[4872]: I0203 06:36:33.869508 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n42m8\" (UniqueName: \"kubernetes.io/projected/acd31a8d-b5af-408b-bc36-983b0cc56d1e-kube-api-access-n42m8\") pod \"redhat-marketplace-chv9d\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:34 crc kubenswrapper[4872]: I0203 06:36:34.037309 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:34 crc kubenswrapper[4872]: I0203 06:36:34.516432 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv9d"] Feb 03 06:36:35 crc kubenswrapper[4872]: I0203 06:36:35.512336 4872 generic.go:334] "Generic (PLEG): container finished" podID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerID="f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2" exitCode=0 Feb 03 06:36:35 crc kubenswrapper[4872]: I0203 06:36:35.512433 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv9d" event={"ID":"acd31a8d-b5af-408b-bc36-983b0cc56d1e","Type":"ContainerDied","Data":"f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2"} Feb 03 06:36:35 crc kubenswrapper[4872]: I0203 06:36:35.512675 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv9d" event={"ID":"acd31a8d-b5af-408b-bc36-983b0cc56d1e","Type":"ContainerStarted","Data":"9f23ce1841387ebf1cfc8cae9c268e194bb200c5bf93190ee98c071ab705e0fd"} Feb 03 06:36:36 crc kubenswrapper[4872]: I0203 06:36:36.524257 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv9d" event={"ID":"acd31a8d-b5af-408b-bc36-983b0cc56d1e","Type":"ContainerStarted","Data":"d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14"} Feb 03 06:36:37 crc kubenswrapper[4872]: I0203 06:36:37.535872 4872 generic.go:334] "Generic (PLEG): container finished" podID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerID="d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14" exitCode=0 Feb 03 06:36:37 crc kubenswrapper[4872]: I0203 06:36:37.535964 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv9d" event={"ID":"acd31a8d-b5af-408b-bc36-983b0cc56d1e","Type":"ContainerDied","Data":"d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14"} Feb 03 06:36:38 crc kubenswrapper[4872]: I0203 06:36:38.547984 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv9d" event={"ID":"acd31a8d-b5af-408b-bc36-983b0cc56d1e","Type":"ContainerStarted","Data":"f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9"} Feb 03 06:36:38 crc kubenswrapper[4872]: I0203 06:36:38.575177 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-chv9d" podStartSLOduration=3.106020632 podStartE2EDuration="5.57515794s" podCreationTimestamp="2026-02-03 06:36:33 +0000 UTC" firstStartedPulling="2026-02-03 06:36:35.518863135 +0000 UTC m=+2166.101554549" lastFinishedPulling="2026-02-03 06:36:37.988000403 +0000 UTC m=+2168.570691857" observedRunningTime="2026-02-03 06:36:38.571298098 +0000 UTC m=+2169.153989512" watchObservedRunningTime="2026-02-03 06:36:38.57515794 +0000 UTC m=+2169.157849354" Feb 03 06:36:44 crc kubenswrapper[4872]: I0203 06:36:44.038346 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:44 crc kubenswrapper[4872]: I0203 06:36:44.039074 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:44 crc kubenswrapper[4872]: I0203 06:36:44.097414 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:44 crc kubenswrapper[4872]: I0203 06:36:44.123655 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:36:44 crc kubenswrapper[4872]: E0203 06:36:44.123982 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:36:44 crc kubenswrapper[4872]: I0203 06:36:44.679967 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:44 crc kubenswrapper[4872]: I0203 06:36:44.750053 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv9d"] Feb 03 06:36:46 crc kubenswrapper[4872]: I0203 06:36:46.627444 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-chv9d" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="registry-server" containerID="cri-o://f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9" gracePeriod=2 Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.177471 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.358415 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n42m8\" (UniqueName: \"kubernetes.io/projected/acd31a8d-b5af-408b-bc36-983b0cc56d1e-kube-api-access-n42m8\") pod \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.358799 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-catalog-content\") pod \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.358823 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-utilities\") pod \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\" (UID: \"acd31a8d-b5af-408b-bc36-983b0cc56d1e\") " Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.359714 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-utilities" (OuterVolumeSpecName: "utilities") pod "acd31a8d-b5af-408b-bc36-983b0cc56d1e" (UID: "acd31a8d-b5af-408b-bc36-983b0cc56d1e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.364406 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd31a8d-b5af-408b-bc36-983b0cc56d1e-kube-api-access-n42m8" (OuterVolumeSpecName: "kube-api-access-n42m8") pod "acd31a8d-b5af-408b-bc36-983b0cc56d1e" (UID: "acd31a8d-b5af-408b-bc36-983b0cc56d1e"). InnerVolumeSpecName "kube-api-access-n42m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.382216 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acd31a8d-b5af-408b-bc36-983b0cc56d1e" (UID: "acd31a8d-b5af-408b-bc36-983b0cc56d1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.461345 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.461400 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acd31a8d-b5af-408b-bc36-983b0cc56d1e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.461424 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n42m8\" (UniqueName: \"kubernetes.io/projected/acd31a8d-b5af-408b-bc36-983b0cc56d1e-kube-api-access-n42m8\") on node \"crc\" DevicePath \"\"" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.639607 4872 generic.go:334] "Generic (PLEG): container finished" podID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerID="f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9" exitCode=0 Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.639637 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv9d" event={"ID":"acd31a8d-b5af-408b-bc36-983b0cc56d1e","Type":"ContainerDied","Data":"f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9"} Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.639712 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv9d" event={"ID":"acd31a8d-b5af-408b-bc36-983b0cc56d1e","Type":"ContainerDied","Data":"9f23ce1841387ebf1cfc8cae9c268e194bb200c5bf93190ee98c071ab705e0fd"} Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.639742 4872 scope.go:117] "RemoveContainer" containerID="f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.639738 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv9d" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.661853 4872 scope.go:117] "RemoveContainer" containerID="d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.688958 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv9d"] Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.700316 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv9d"] Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.700639 4872 scope.go:117] "RemoveContainer" containerID="f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.743975 4872 scope.go:117] "RemoveContainer" containerID="f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9" Feb 03 06:36:47 crc kubenswrapper[4872]: E0203 06:36:47.744759 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9\": container with ID starting with f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9 not found: ID does not exist" containerID="f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.744813 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9"} err="failed to get container status \"f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9\": rpc error: code = NotFound desc = could not find container \"f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9\": container with ID starting with f7dd480d835d1c04a1c3026fd2bff98b06d5c9f96a4c91b88fe231d8ba5f51b9 not found: ID does not exist" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.744847 4872 scope.go:117] "RemoveContainer" containerID="d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14" Feb 03 06:36:47 crc kubenswrapper[4872]: E0203 06:36:47.745226 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14\": container with ID starting with d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14 not found: ID does not exist" containerID="d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.745265 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14"} err="failed to get container status \"d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14\": rpc error: code = NotFound desc = could not find container \"d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14\": container with ID starting with d8a5d6c223470da265f265ce220b2532c33db1fe42f97e17be3a5819b3dd4f14 not found: ID does not exist" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.745293 4872 scope.go:117] "RemoveContainer" containerID="f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2" Feb 03 06:36:47 crc kubenswrapper[4872]: E0203 06:36:47.745595 4872 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2\": container with ID starting with f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2 not found: ID does not exist" containerID="f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2" Feb 03 06:36:47 crc kubenswrapper[4872]: I0203 06:36:47.745626 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2"} err="failed to get container status \"f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2\": rpc error: code = NotFound desc = could not find container \"f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2\": container with ID starting with f28ecaa74c85e525d8c6b89393712ac318e6bb4e8718858ef4534aa3dc202fd2 not found: ID does not exist" Feb 03 06:36:48 crc kubenswrapper[4872]: I0203 06:36:48.135403 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" path="/var/lib/kubelet/pods/acd31a8d-b5af-408b-bc36-983b0cc56d1e/volumes" Feb 03 06:36:55 crc kubenswrapper[4872]: I0203 06:36:55.123020 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:36:55 crc kubenswrapper[4872]: E0203 06:36:55.123563 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.374309 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fs57x"] Feb 03 06:37:04 crc kubenswrapper[4872]: E0203 06:37:04.375331 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="registry-server" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.375345 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="registry-server" Feb 03 06:37:04 crc kubenswrapper[4872]: E0203 06:37:04.375361 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="extract-content" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.375374 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="extract-content" Feb 03 06:37:04 crc kubenswrapper[4872]: E0203 06:37:04.375392 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="extract-utilities" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.375398 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="extract-utilities" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.375574 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd31a8d-b5af-408b-bc36-983b0cc56d1e" containerName="registry-server" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.376925 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.388652 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fs57x"] Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.478551 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpqq\" (UniqueName: \"kubernetes.io/projected/9554ddf7-034d-44ef-bf00-3f68463a4e12-kube-api-access-jgpqq\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.479000 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-catalog-content\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.479136 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-utilities\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.581389 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-catalog-content\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.581795 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-utilities\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.581879 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgpqq\" (UniqueName: \"kubernetes.io/projected/9554ddf7-034d-44ef-bf00-3f68463a4e12-kube-api-access-jgpqq\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.581940 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-catalog-content\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.582569 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-utilities\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.610967 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jgpqq\" (UniqueName: \"kubernetes.io/projected/9554ddf7-034d-44ef-bf00-3f68463a4e12-kube-api-access-jgpqq\") pod \"redhat-operators-fs57x\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:04 crc kubenswrapper[4872]: I0203 06:37:04.705671 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:05 crc kubenswrapper[4872]: I0203 06:37:05.242094 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fs57x"] Feb 03 06:37:05 crc kubenswrapper[4872]: I0203 06:37:05.826635 4872 generic.go:334] "Generic (PLEG): container finished" podID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerID="febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab" exitCode=0 Feb 03 06:37:05 crc kubenswrapper[4872]: I0203 06:37:05.826739 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs57x" event={"ID":"9554ddf7-034d-44ef-bf00-3f68463a4e12","Type":"ContainerDied","Data":"febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab"} Feb 03 06:37:05 crc kubenswrapper[4872]: I0203 06:37:05.827069 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs57x" event={"ID":"9554ddf7-034d-44ef-bf00-3f68463a4e12","Type":"ContainerStarted","Data":"a093faeebd7a41739395f3b14bf4d2ac95927801eeaffee56f3d1cf849e57c6e"} Feb 03 06:37:05 crc kubenswrapper[4872]: I0203 06:37:05.828547 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:37:06 crc kubenswrapper[4872]: I0203 06:37:06.840616 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs57x" event={"ID":"9554ddf7-034d-44ef-bf00-3f68463a4e12","Type":"ContainerStarted","Data":"6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed"} Feb 03 06:37:09 crc kubenswrapper[4872]: I0203 06:37:09.147138 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:37:09 crc kubenswrapper[4872]: E0203 06:37:09.147831 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:37:11 crc kubenswrapper[4872]: I0203 06:37:11.906069 4872 generic.go:334] "Generic (PLEG): container finished" podID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerID="6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed" exitCode=0 Feb 03 06:37:11 crc kubenswrapper[4872]: I0203 06:37:11.906104 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs57x" event={"ID":"9554ddf7-034d-44ef-bf00-3f68463a4e12","Type":"ContainerDied","Data":"6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed"} Feb 03 06:37:12 crc kubenswrapper[4872]: I0203 06:37:12.922736 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs57x" 
event={"ID":"9554ddf7-034d-44ef-bf00-3f68463a4e12","Type":"ContainerStarted","Data":"93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494"} Feb 03 06:37:12 crc kubenswrapper[4872]: I0203 06:37:12.953591 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fs57x" podStartSLOduration=2.430136644 podStartE2EDuration="8.953566735s" podCreationTimestamp="2026-02-03 06:37:04 +0000 UTC" firstStartedPulling="2026-02-03 06:37:05.828346077 +0000 UTC m=+2196.411037491" lastFinishedPulling="2026-02-03 06:37:12.351776168 +0000 UTC m=+2202.934467582" observedRunningTime="2026-02-03 06:37:12.949238221 +0000 UTC m=+2203.531929675" watchObservedRunningTime="2026-02-03 06:37:12.953566735 +0000 UTC m=+2203.536258199" Feb 03 06:37:14 crc kubenswrapper[4872]: I0203 06:37:14.706932 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:14 crc kubenswrapper[4872]: I0203 06:37:14.707264 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:15 crc kubenswrapper[4872]: I0203 06:37:15.749961 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fs57x" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="registry-server" probeResult="failure" output=< Feb 03 06:37:15 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:37:15 crc kubenswrapper[4872]: > Feb 03 06:37:24 crc kubenswrapper[4872]: I0203 06:37:24.123478 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:37:24 crc kubenswrapper[4872]: E0203 06:37:24.125225 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:37:24 crc kubenswrapper[4872]: I0203 06:37:24.769219 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:24 crc kubenswrapper[4872]: I0203 06:37:24.852181 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:25 crc kubenswrapper[4872]: I0203 06:37:25.017856 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fs57x"] Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.238081 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fs57x" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="registry-server" containerID="cri-o://93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494" gracePeriod=2 Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.733334 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.864764 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-catalog-content\") pod \"9554ddf7-034d-44ef-bf00-3f68463a4e12\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.864818 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-utilities\") pod \"9554ddf7-034d-44ef-bf00-3f68463a4e12\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.864840 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgpqq\" (UniqueName: \"kubernetes.io/projected/9554ddf7-034d-44ef-bf00-3f68463a4e12-kube-api-access-jgpqq\") pod \"9554ddf7-034d-44ef-bf00-3f68463a4e12\" (UID: \"9554ddf7-034d-44ef-bf00-3f68463a4e12\") " Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.866592 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-utilities" (OuterVolumeSpecName: "utilities") pod "9554ddf7-034d-44ef-bf00-3f68463a4e12" (UID: "9554ddf7-034d-44ef-bf00-3f68463a4e12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.870618 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9554ddf7-034d-44ef-bf00-3f68463a4e12-kube-api-access-jgpqq" (OuterVolumeSpecName: "kube-api-access-jgpqq") pod "9554ddf7-034d-44ef-bf00-3f68463a4e12" (UID: "9554ddf7-034d-44ef-bf00-3f68463a4e12"). InnerVolumeSpecName "kube-api-access-jgpqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.967260 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:37:26 crc kubenswrapper[4872]: I0203 06:37:26.967298 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgpqq\" (UniqueName: \"kubernetes.io/projected/9554ddf7-034d-44ef-bf00-3f68463a4e12-kube-api-access-jgpqq\") on node \"crc\" DevicePath \"\"" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.032954 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9554ddf7-034d-44ef-bf00-3f68463a4e12" (UID: "9554ddf7-034d-44ef-bf00-3f68463a4e12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.069183 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9554ddf7-034d-44ef-bf00-3f68463a4e12-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.255881 4872 generic.go:334] "Generic (PLEG): container finished" podID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerID="93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494" exitCode=0 Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.255974 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs57x" event={"ID":"9554ddf7-034d-44ef-bf00-3f68463a4e12","Type":"ContainerDied","Data":"93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494"} Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.256029 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs57x" event={"ID":"9554ddf7-034d-44ef-bf00-3f68463a4e12","Type":"ContainerDied","Data":"a093faeebd7a41739395f3b14bf4d2ac95927801eeaffee56f3d1cf849e57c6e"} Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.256070 4872 scope.go:117] "RemoveContainer" containerID="93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.256084 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fs57x" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.286266 4872 scope.go:117] "RemoveContainer" containerID="6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.320677 4872 scope.go:117] "RemoveContainer" containerID="febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.335276 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fs57x"] Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.350359 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fs57x"] Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.396495 4872 scope.go:117] "RemoveContainer" containerID="93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494" Feb 03 06:37:27 crc kubenswrapper[4872]: E0203 06:37:27.396968 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494\": container with ID starting with 93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494 not found: ID does not exist" containerID="93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.397001 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494"} err="failed to get container status \"93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494\": rpc error: code = NotFound desc = could not find container \"93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494\": container with ID starting with 93ac9d48db59ef7f31ec7b40221b8636c2d18c18949b0c1a0d144f90acc4f494 not found: ID does not exist" Feb 03 06:37:27 crc 
kubenswrapper[4872]: I0203 06:37:27.397022 4872 scope.go:117] "RemoveContainer" containerID="6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed" Feb 03 06:37:27 crc kubenswrapper[4872]: E0203 06:37:27.397362 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed\": container with ID starting with 6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed not found: ID does not exist" containerID="6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.397400 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed"} err="failed to get container status \"6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed\": rpc error: code = NotFound desc = could not find container \"6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed\": container with ID starting with 6f4e2223021f161cc0aaee19e6e162aa4c7279be8aa4c7b4c80a0dc780ae25ed not found: ID does not exist" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.397425 4872 scope.go:117] "RemoveContainer" containerID="febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab" Feb 03 06:37:27 crc kubenswrapper[4872]: E0203 06:37:27.397872 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab\": container with ID starting with febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab not found: ID does not exist" containerID="febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab" Feb 03 06:37:27 crc kubenswrapper[4872]: I0203 06:37:27.397971 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab"} err="failed to get container status \"febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab\": rpc error: code = NotFound desc = could not find container \"febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab\": container with ID starting with febbb12c32220908859b3c2830b8ec0dd36e92067b2662226a7d41e3062c4aab not found: ID does not exist" Feb 03 06:37:28 crc kubenswrapper[4872]: I0203 06:37:28.141533 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" path="/var/lib/kubelet/pods/9554ddf7-034d-44ef-bf00-3f68463a4e12/volumes" Feb 03 06:37:38 crc kubenswrapper[4872]: I0203 06:37:38.124106 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:37:38 crc kubenswrapper[4872]: E0203 06:37:38.125505 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:37:50 crc kubenswrapper[4872]: I0203 06:37:50.136457 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" 
Feb 03 06:37:50 crc kubenswrapper[4872]: E0203 06:37:50.138567 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:38:04 crc kubenswrapper[4872]: I0203 06:38:04.122593 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:38:04 crc kubenswrapper[4872]: E0203 06:38:04.123374 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:38:19 crc kubenswrapper[4872]: I0203 06:38:19.123322 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:38:19 crc kubenswrapper[4872]: E0203 06:38:19.124216 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:38:34 crc kubenswrapper[4872]: I0203 06:38:34.123616 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:38:34 crc kubenswrapper[4872]: E0203 06:38:34.124590 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:38:47 crc kubenswrapper[4872]: I0203 06:38:47.123612 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:38:47 crc kubenswrapper[4872]: E0203 06:38:47.125068 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.463098 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-49xxm"] Feb 03 06:38:59 crc kubenswrapper[4872]: E0203 06:38:59.464132 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" 
containerName="extract-utilities" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.464148 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="extract-utilities" Feb 03 06:38:59 crc kubenswrapper[4872]: E0203 06:38:59.464167 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="registry-server" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.464175 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="registry-server" Feb 03 06:38:59 crc kubenswrapper[4872]: E0203 06:38:59.464213 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="extract-content" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.464222 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="extract-content" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.464440 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9554ddf7-034d-44ef-bf00-3f68463a4e12" containerName="registry-server" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.466171 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.480593 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49xxm"] Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.617778 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-utilities\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.617826 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-catalog-content\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.617850 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r27r\" (UniqueName: \"kubernetes.io/projected/29213ca1-122b-4e9d-8ec6-46d380624bec-kube-api-access-4r27r\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.719103 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-utilities\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.719393 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-catalog-content\") pod \"certified-operators-49xxm\" (UID: 
\"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.719491 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r27r\" (UniqueName: \"kubernetes.io/projected/29213ca1-122b-4e9d-8ec6-46d380624bec-kube-api-access-4r27r\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.719763 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-utilities\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.720300 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-catalog-content\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.742283 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r27r\" (UniqueName: \"kubernetes.io/projected/29213ca1-122b-4e9d-8ec6-46d380624bec-kube-api-access-4r27r\") pod \"certified-operators-49xxm\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:38:59 crc kubenswrapper[4872]: I0203 06:38:59.788310 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:39:00 crc kubenswrapper[4872]: I0203 06:39:00.128755 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:39:00 crc kubenswrapper[4872]: E0203 06:39:00.129293 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:39:00 crc kubenswrapper[4872]: I0203 06:39:00.343562 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49xxm"] Feb 03 06:39:01 crc kubenswrapper[4872]: I0203 06:39:01.157159 4872 generic.go:334] "Generic (PLEG): container finished" podID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerID="fd944d34b0d7ee911945b94fd84a9ea0831c4f651daa5e2c1d29975a4d16163f" exitCode=0 Feb 03 06:39:01 crc kubenswrapper[4872]: I0203 06:39:01.157419 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49xxm" event={"ID":"29213ca1-122b-4e9d-8ec6-46d380624bec","Type":"ContainerDied","Data":"fd944d34b0d7ee911945b94fd84a9ea0831c4f651daa5e2c1d29975a4d16163f"} Feb 03 06:39:01 crc kubenswrapper[4872]: I0203 06:39:01.157737 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49xxm" event={"ID":"29213ca1-122b-4e9d-8ec6-46d380624bec","Type":"ContainerStarted","Data":"3ecfbc4ffa317f76d7db90a11adfc309ea50de786a5f5e928dbabf92716fd7af"} Feb 03 06:39:02 crc kubenswrapper[4872]: I0203 06:39:02.167672 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49xxm" event={"ID":"29213ca1-122b-4e9d-8ec6-46d380624bec","Type":"ContainerStarted","Data":"71573c6f47cb19c1db4238c739d64dd889e196af1b0bb4c307ceb745f03021df"} Feb 03 06:39:03 crc kubenswrapper[4872]: I0203 06:39:03.187414 4872 generic.go:334] "Generic (PLEG): container finished" podID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerID="71573c6f47cb19c1db4238c739d64dd889e196af1b0bb4c307ceb745f03021df" exitCode=0 Feb 03 06:39:03 crc kubenswrapper[4872]: I0203 06:39:03.187485 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49xxm" event={"ID":"29213ca1-122b-4e9d-8ec6-46d380624bec","Type":"ContainerDied","Data":"71573c6f47cb19c1db4238c739d64dd889e196af1b0bb4c307ceb745f03021df"} Feb 03 06:39:04 crc kubenswrapper[4872]: I0203 06:39:04.197786 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49xxm" event={"ID":"29213ca1-122b-4e9d-8ec6-46d380624bec","Type":"ContainerStarted","Data":"1bbec60945d12e8a1813af936b5cc128adbb873ebb1a5d267ee0ee3730317ca3"} Feb 03 06:39:04 crc kubenswrapper[4872]: I0203 06:39:04.224397 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-49xxm" podStartSLOduration=2.56252015 podStartE2EDuration="5.224372152s" podCreationTimestamp="2026-02-03 06:38:59 +0000 UTC" firstStartedPulling="2026-02-03 06:39:01.159383238 +0000 UTC m=+2311.742074652" lastFinishedPulling="2026-02-03 06:39:03.82123522 +0000 UTC m=+2314.403926654" observedRunningTime="2026-02-03 
06:39:04.220417078 +0000 UTC m=+2314.803108512" watchObservedRunningTime="2026-02-03 06:39:04.224372152 +0000 UTC m=+2314.807063576" Feb 03 06:39:09 crc kubenswrapper[4872]: I0203 06:39:09.789669 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:39:09 crc kubenswrapper[4872]: I0203 06:39:09.790230 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:39:09 crc kubenswrapper[4872]: I0203 06:39:09.846752 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:39:10 crc kubenswrapper[4872]: I0203 06:39:10.315641 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:39:10 crc kubenswrapper[4872]: I0203 06:39:10.363128 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-49xxm"] Feb 03 06:39:12 crc kubenswrapper[4872]: I0203 06:39:12.272035 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-49xxm" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="registry-server" containerID="cri-o://1bbec60945d12e8a1813af936b5cc128adbb873ebb1a5d267ee0ee3730317ca3" gracePeriod=2 Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.281177 4872 generic.go:334] "Generic (PLEG): container finished" podID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerID="1bbec60945d12e8a1813af936b5cc128adbb873ebb1a5d267ee0ee3730317ca3" exitCode=0 Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.281254 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49xxm" event={"ID":"29213ca1-122b-4e9d-8ec6-46d380624bec","Type":"ContainerDied","Data":"1bbec60945d12e8a1813af936b5cc128adbb873ebb1a5d267ee0ee3730317ca3"} Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.281490 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49xxm" event={"ID":"29213ca1-122b-4e9d-8ec6-46d380624bec","Type":"ContainerDied","Data":"3ecfbc4ffa317f76d7db90a11adfc309ea50de786a5f5e928dbabf92716fd7af"} Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.281506 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ecfbc4ffa317f76d7db90a11adfc309ea50de786a5f5e928dbabf92716fd7af" Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.309713 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.460750 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r27r\" (UniqueName: \"kubernetes.io/projected/29213ca1-122b-4e9d-8ec6-46d380624bec-kube-api-access-4r27r\") pod \"29213ca1-122b-4e9d-8ec6-46d380624bec\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.460788 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-catalog-content\") pod \"29213ca1-122b-4e9d-8ec6-46d380624bec\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.460849 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-utilities\") pod \"29213ca1-122b-4e9d-8ec6-46d380624bec\" (UID: \"29213ca1-122b-4e9d-8ec6-46d380624bec\") " Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.461883 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-utilities" (OuterVolumeSpecName: "utilities") pod "29213ca1-122b-4e9d-8ec6-46d380624bec" (UID: "29213ca1-122b-4e9d-8ec6-46d380624bec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.474006 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29213ca1-122b-4e9d-8ec6-46d380624bec-kube-api-access-4r27r" (OuterVolumeSpecName: "kube-api-access-4r27r") pod "29213ca1-122b-4e9d-8ec6-46d380624bec" (UID: "29213ca1-122b-4e9d-8ec6-46d380624bec"). InnerVolumeSpecName "kube-api-access-4r27r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.508237 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29213ca1-122b-4e9d-8ec6-46d380624bec" (UID: "29213ca1-122b-4e9d-8ec6-46d380624bec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.563635 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r27r\" (UniqueName: \"kubernetes.io/projected/29213ca1-122b-4e9d-8ec6-46d380624bec-kube-api-access-4r27r\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.563683 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:13 crc kubenswrapper[4872]: I0203 06:39:13.563727 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29213ca1-122b-4e9d-8ec6-46d380624bec-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:14 crc kubenswrapper[4872]: I0203 06:39:14.122838 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:39:14 crc kubenswrapper[4872]: E0203 06:39:14.123789 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:39:14 crc kubenswrapper[4872]: I0203 06:39:14.290085 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-49xxm" Feb 03 06:39:14 crc kubenswrapper[4872]: I0203 06:39:14.317585 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-49xxm"] Feb 03 06:39:14 crc kubenswrapper[4872]: I0203 06:39:14.326938 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-49xxm"] Feb 03 06:39:16 crc kubenswrapper[4872]: I0203 06:39:16.140579 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" path="/var/lib/kubelet/pods/29213ca1-122b-4e9d-8ec6-46d380624bec/volumes" Feb 03 06:39:29 crc kubenswrapper[4872]: I0203 06:39:29.123563 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:39:29 crc kubenswrapper[4872]: E0203 06:39:29.124856 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:39:41 crc kubenswrapper[4872]: I0203 06:39:41.123651 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:39:41 crc kubenswrapper[4872]: E0203 06:39:41.127131 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.665906 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7m9zv"]
Feb 03 06:39:46 crc kubenswrapper[4872]: E0203 06:39:46.667239 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="extract-content"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.667256 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="extract-content"
Feb 03 06:39:46 crc kubenswrapper[4872]: E0203 06:39:46.667274 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="extract-utilities"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.667281 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="extract-utilities"
Feb 03 06:39:46 crc kubenswrapper[4872]: E0203 06:39:46.667292 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="registry-server"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.667297 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="registry-server"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.667464 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="29213ca1-122b-4e9d-8ec6-46d380624bec" containerName="registry-server"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.669981 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.686311 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7m9zv"]
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.788606 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-catalog-content\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.788839 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-utilities\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.788984 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2df8\" (UniqueName: \"kubernetes.io/projected/c0be7748-42cb-4be0-94e9-afff78fa7750-kube-api-access-b2df8\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.890169 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-utilities\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.890276 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2df8\" (UniqueName: \"kubernetes.io/projected/c0be7748-42cb-4be0-94e9-afff78fa7750-kube-api-access-b2df8\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.890346 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-catalog-content\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.890715 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-utilities\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.890971 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-catalog-content\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.911227 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2df8\" (UniqueName: \"kubernetes.io/projected/c0be7748-42cb-4be0-94e9-afff78fa7750-kube-api-access-b2df8\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv"
"MountVolume.SetUp succeeded for volume \"kube-api-access-b2df8\" (UniqueName: \"kubernetes.io/projected/c0be7748-42cb-4be0-94e9-afff78fa7750-kube-api-access-b2df8\") pod \"community-operators-7m9zv\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:39:46 crc kubenswrapper[4872]: I0203 06:39:46.986816 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:39:47 crc kubenswrapper[4872]: I0203 06:39:47.533588 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7m9zv"] Feb 03 06:39:47 crc kubenswrapper[4872]: I0203 06:39:47.629584 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7m9zv" event={"ID":"c0be7748-42cb-4be0-94e9-afff78fa7750","Type":"ContainerStarted","Data":"3c97f142c4ffca046498dba97a4f32f19130fafbbd426bbf140922a89d108f45"} Feb 03 06:39:48 crc kubenswrapper[4872]: I0203 06:39:48.641278 4872 generic.go:334] "Generic (PLEG): container finished" podID="3d7243a2-dd2b-4462-8313-92e68450f743" containerID="0ecdbaf9b672c75743741b297f52b5b459894f31571d1326fe657443e32901b5" exitCode=0 Feb 03 06:39:48 crc kubenswrapper[4872]: I0203 06:39:48.641360 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" event={"ID":"3d7243a2-dd2b-4462-8313-92e68450f743","Type":"ContainerDied","Data":"0ecdbaf9b672c75743741b297f52b5b459894f31571d1326fe657443e32901b5"} Feb 03 06:39:48 crc kubenswrapper[4872]: I0203 06:39:48.644057 4872 generic.go:334] "Generic (PLEG): container finished" podID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerID="ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8" exitCode=0 Feb 03 06:39:48 crc kubenswrapper[4872]: I0203 06:39:48.644096 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7m9zv" event={"ID":"c0be7748-42cb-4be0-94e9-afff78fa7750","Type":"ContainerDied","Data":"ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8"} Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.171982 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.303046 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-ssh-key-openstack-edpm-ipam\") pod \"3d7243a2-dd2b-4462-8313-92e68450f743\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.303272 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-secret-0\") pod \"3d7243a2-dd2b-4462-8313-92e68450f743\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.303516 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-inventory\") pod \"3d7243a2-dd2b-4462-8313-92e68450f743\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.303575 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-combined-ca-bundle\") pod \"3d7243a2-dd2b-4462-8313-92e68450f743\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.303651 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgzbx\" (UniqueName: \"kubernetes.io/projected/3d7243a2-dd2b-4462-8313-92e68450f743-kube-api-access-mgzbx\") pod \"3d7243a2-dd2b-4462-8313-92e68450f743\" (UID: \"3d7243a2-dd2b-4462-8313-92e68450f743\") " Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.308476 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3d7243a2-dd2b-4462-8313-92e68450f743" (UID: "3d7243a2-dd2b-4462-8313-92e68450f743"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.309086 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7243a2-dd2b-4462-8313-92e68450f743-kube-api-access-mgzbx" (OuterVolumeSpecName: "kube-api-access-mgzbx") pod "3d7243a2-dd2b-4462-8313-92e68450f743" (UID: "3d7243a2-dd2b-4462-8313-92e68450f743"). InnerVolumeSpecName "kube-api-access-mgzbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.328913 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "3d7243a2-dd2b-4462-8313-92e68450f743" (UID: "3d7243a2-dd2b-4462-8313-92e68450f743"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.338305 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d7243a2-dd2b-4462-8313-92e68450f743" (UID: "3d7243a2-dd2b-4462-8313-92e68450f743"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.341871 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-inventory" (OuterVolumeSpecName: "inventory") pod "3d7243a2-dd2b-4462-8313-92e68450f743" (UID: "3d7243a2-dd2b-4462-8313-92e68450f743"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.405407 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.405437 4872 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.405447 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.405455 4872 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7243a2-dd2b-4462-8313-92e68450f743-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.405465 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgzbx\" (UniqueName: \"kubernetes.io/projected/3d7243a2-dd2b-4462-8313-92e68450f743-kube-api-access-mgzbx\") on node \"crc\" DevicePath \"\"" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.665454 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.666426 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs" event={"ID":"3d7243a2-dd2b-4462-8313-92e68450f743","Type":"ContainerDied","Data":"96d0bbd8aed2992f3d3c49d095a2b61430f7acd21116cb218310b624a5108658"} Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.666472 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d0bbd8aed2992f3d3c49d095a2b61430f7acd21116cb218310b624a5108658" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.669302 4872 generic.go:334] "Generic (PLEG): container finished" podID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerID="e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04" exitCode=0 Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.669351 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7m9zv" event={"ID":"c0be7748-42cb-4be0-94e9-afff78fa7750","Type":"ContainerDied","Data":"e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04"} Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.814082 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw"] Feb 03 06:39:50 crc kubenswrapper[4872]: E0203 06:39:50.814524 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7243a2-dd2b-4462-8313-92e68450f743" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.814545 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7243a2-dd2b-4462-8313-92e68450f743" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.814813 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7243a2-dd2b-4462-8313-92e68450f743" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.815473 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.820624 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.820916 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.821078 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.821287 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.821459 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.821603 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.822378 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.846606 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw"] Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.914916 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.915262 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.915294 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.915318 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.915377 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.915399 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.915703 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.915864 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmfp\" (UniqueName: \"kubernetes.io/projected/74a3f126-057b-4f44-9483-82e6a6a00c90-kube-api-access-cmmfp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:50 crc kubenswrapper[4872]: I0203 06:39:50.916035 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.017985 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018038 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018087 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018129 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cmmfp\" (UniqueName: \"kubernetes.io/projected/74a3f126-057b-4f44-9483-82e6a6a00c90-kube-api-access-cmmfp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018162 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018190 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018221 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018250 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.018275 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.019785 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.023023 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.023470 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.024086 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.024099 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.025363 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.027076 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.036224 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.040141 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmfp\" (UniqueName: \"kubernetes.io/projected/74a3f126-057b-4f44-9483-82e6a6a00c90-kube-api-access-cmmfp\") pod \"nova-edpm-deployment-openstack-edpm-ipam-m5jcw\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.152109 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.680549 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7m9zv" event={"ID":"c0be7748-42cb-4be0-94e9-afff78fa7750","Type":"ContainerStarted","Data":"cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb"} Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.712051 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7m9zv" podStartSLOduration=3.287854511 podStartE2EDuration="5.712032413s" podCreationTimestamp="2026-02-03 06:39:46 +0000 UTC" firstStartedPulling="2026-02-03 06:39:48.646305184 +0000 UTC m=+2359.228996598" lastFinishedPulling="2026-02-03 06:39:51.070483086 +0000 UTC m=+2361.653174500" observedRunningTime="2026-02-03 06:39:51.70683763 +0000 UTC m=+2362.289529034" watchObservedRunningTime="2026-02-03 06:39:51.712032413 +0000 UTC m=+2362.294723827" Feb 03 06:39:51 crc kubenswrapper[4872]: W0203 06:39:51.757505 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74a3f126_057b_4f44_9483_82e6a6a00c90.slice/crio-b69181dbc0b3c68833ba6da90d627c4c973069cd3151c3adcbe57939e0e3b12f WatchSource:0}: Error finding container b69181dbc0b3c68833ba6da90d627c4c973069cd3151c3adcbe57939e0e3b12f: Status 404 returned error can't find the container with id b69181dbc0b3c68833ba6da90d627c4c973069cd3151c3adcbe57939e0e3b12f Feb 03 06:39:51 crc kubenswrapper[4872]: I0203 06:39:51.760651 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw"] Feb 03 06:39:52 crc kubenswrapper[4872]: I0203 06:39:52.691004 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" event={"ID":"74a3f126-057b-4f44-9483-82e6a6a00c90","Type":"ContainerStarted","Data":"85fc37e4186679e43ac5661b0d5be2658e411dbbd3653e1493d4ffa89e3fbbc9"} Feb 03 06:39:52 crc kubenswrapper[4872]: I0203 06:39:52.691510 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" event={"ID":"74a3f126-057b-4f44-9483-82e6a6a00c90","Type":"ContainerStarted","Data":"b69181dbc0b3c68833ba6da90d627c4c973069cd3151c3adcbe57939e0e3b12f"} Feb 03 06:39:52 crc kubenswrapper[4872]: I0203 06:39:52.711490 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" podStartSLOduration=2.525008781 podStartE2EDuration="2.71147321s" podCreationTimestamp="2026-02-03 06:39:50 +0000 UTC" firstStartedPulling="2026-02-03 06:39:51.762123858 +0000 UTC m=+2362.344815272" lastFinishedPulling="2026-02-03 06:39:51.948588277 +0000 UTC m=+2362.531279701" observedRunningTime="2026-02-03 06:39:52.709586886 +0000 UTC m=+2363.292278320" watchObservedRunningTime="2026-02-03 06:39:52.71147321 +0000 UTC m=+2363.294164624" Feb 03 06:39:55 crc kubenswrapper[4872]: I0203 06:39:55.122517 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:39:55 crc kubenswrapper[4872]: E0203 06:39:55.122954 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:39:56 crc kubenswrapper[4872]: I0203 06:39:56.987936 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:39:56 crc kubenswrapper[4872]: I0203 06:39:56.988418 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:39:57 crc kubenswrapper[4872]: I0203 06:39:57.045366 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:39:57 crc kubenswrapper[4872]: I0203 06:39:57.825511 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:39:57 crc kubenswrapper[4872]: I0203 06:39:57.882738 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7m9zv"] Feb 03 06:39:59 crc kubenswrapper[4872]: I0203 06:39:59.769357 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7m9zv" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="registry-server" containerID="cri-o://cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb" gracePeriod=2 Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.360910 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.424370 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-catalog-content\") pod \"c0be7748-42cb-4be0-94e9-afff78fa7750\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.424512 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2df8\" (UniqueName: \"kubernetes.io/projected/c0be7748-42cb-4be0-94e9-afff78fa7750-kube-api-access-b2df8\") pod \"c0be7748-42cb-4be0-94e9-afff78fa7750\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.424577 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-utilities\") pod \"c0be7748-42cb-4be0-94e9-afff78fa7750\" (UID: \"c0be7748-42cb-4be0-94e9-afff78fa7750\") " Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.427250 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-utilities" (OuterVolumeSpecName: "utilities") pod "c0be7748-42cb-4be0-94e9-afff78fa7750" (UID: "c0be7748-42cb-4be0-94e9-afff78fa7750"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.445434 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0be7748-42cb-4be0-94e9-afff78fa7750-kube-api-access-b2df8" (OuterVolumeSpecName: "kube-api-access-b2df8") pod "c0be7748-42cb-4be0-94e9-afff78fa7750" (UID: "c0be7748-42cb-4be0-94e9-afff78fa7750"). InnerVolumeSpecName "kube-api-access-b2df8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.479836 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0be7748-42cb-4be0-94e9-afff78fa7750" (UID: "c0be7748-42cb-4be0-94e9-afff78fa7750"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.526926 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.526957 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2df8\" (UniqueName: \"kubernetes.io/projected/c0be7748-42cb-4be0-94e9-afff78fa7750-kube-api-access-b2df8\") on node \"crc\" DevicePath \"\"" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.526968 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0be7748-42cb-4be0-94e9-afff78fa7750-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.782324 4872 generic.go:334] "Generic (PLEG): container finished" podID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerID="cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb" exitCode=0 Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.782379 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7m9zv" event={"ID":"c0be7748-42cb-4be0-94e9-afff78fa7750","Type":"ContainerDied","Data":"cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb"} Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.782414 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7m9zv" event={"ID":"c0be7748-42cb-4be0-94e9-afff78fa7750","Type":"ContainerDied","Data":"3c97f142c4ffca046498dba97a4f32f19130fafbbd426bbf140922a89d108f45"} Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.782438 4872 scope.go:117] "RemoveContainer" containerID="cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.782439 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7m9zv" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.814720 4872 scope.go:117] "RemoveContainer" containerID="e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.842021 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7m9zv"] Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.852532 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7m9zv"] Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.874110 4872 scope.go:117] "RemoveContainer" containerID="ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.899083 4872 scope.go:117] "RemoveContainer" containerID="cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb" Feb 03 06:40:00 crc kubenswrapper[4872]: E0203 06:40:00.899657 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb\": container with ID starting with cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb not found: ID does not exist" containerID="cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.899800 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb"} err="failed to get container status \"cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb\": rpc error: code = NotFound desc = could not find container \"cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb\": container with ID starting with cdaf77ea76f5ca402e64ccd6ca8d37612859b6b5aa1ee849a2caaca0a30a5beb not found: ID does not exist" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.899828 4872 scope.go:117] "RemoveContainer" containerID="e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04" Feb 03 06:40:00 crc kubenswrapper[4872]: E0203 06:40:00.901711 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04\": container with ID starting with e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04 not found: ID does not exist" containerID="e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.901746 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04"} err="failed to get container status \"e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04\": rpc error: code = NotFound desc = could not find container \"e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04\": container with ID starting with e1c6eb24634a0647ca0bc134cc95056a9a2a7027a700c7fe65e6ed8305c40c04 not found: ID does not exist" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.901771 4872 scope.go:117] "RemoveContainer" containerID="ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8" Feb 03 06:40:00 crc kubenswrapper[4872]: E0203 06:40:00.902070 4872 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8\": container with ID starting with ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8 not found: ID does not exist" containerID="ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8" Feb 03 06:40:00 crc kubenswrapper[4872]: I0203 06:40:00.902100 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8"} err="failed to get container status \"ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8\": rpc error: code = NotFound desc = could not find container \"ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8\": container with ID starting with ee4efa1e678d9fcf5feb99d78967f2fedcdec95a673d7ec5f8ef18b65f92e0f8 not found: ID does not exist" Feb 03 06:40:02 crc kubenswrapper[4872]: I0203 06:40:02.134422 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" path="/var/lib/kubelet/pods/c0be7748-42cb-4be0-94e9-afff78fa7750/volumes" Feb 03 06:40:10 crc kubenswrapper[4872]: I0203 06:40:10.134053 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:40:10 crc kubenswrapper[4872]: E0203 06:40:10.134831 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:40:25 crc kubenswrapper[4872]: I0203 06:40:25.123792 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:40:25 crc kubenswrapper[4872]: E0203 06:40:25.124471 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:40:36 crc kubenswrapper[4872]: I0203 06:40:36.124047 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:40:36 crc kubenswrapper[4872]: E0203 06:40:36.125127 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:40:48 crc kubenswrapper[4872]: I0203 06:40:48.123635 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:40:48 crc kubenswrapper[4872]: E0203 06:40:48.125558 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:41:01 crc kubenswrapper[4872]: I0203 06:41:01.125003 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:41:01 crc kubenswrapper[4872]: E0203 06:41:01.126893 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:41:16 crc kubenswrapper[4872]: I0203 06:41:16.122660 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:41:16 crc kubenswrapper[4872]: E0203 06:41:16.123457 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:41:30 crc kubenswrapper[4872]: I0203 06:41:30.133515 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:41:30 crc kubenswrapper[4872]: E0203 06:41:30.134337 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:41:44 crc kubenswrapper[4872]: I0203 06:41:44.122747 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:41:45 crc kubenswrapper[4872]: I0203 06:41:45.148635 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"83587a82c83550a53d3d891250f16f169bd528c1a4785c410ab0572d8afa3cf9"} Feb 03 06:42:12 crc kubenswrapper[4872]: I0203 06:42:12.403184 4872 generic.go:334] "Generic (PLEG): container finished" podID="74a3f126-057b-4f44-9483-82e6a6a00c90" containerID="85fc37e4186679e43ac5661b0d5be2658e411dbbd3653e1493d4ffa89e3fbbc9" exitCode=0 Feb 03 06:42:12 crc kubenswrapper[4872]: I0203 06:42:12.403226 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" event={"ID":"74a3f126-057b-4f44-9483-82e6a6a00c90","Type":"ContainerDied","Data":"85fc37e4186679e43ac5661b0d5be2658e411dbbd3653e1493d4ffa89e3fbbc9"} Feb 03 06:42:13 crc kubenswrapper[4872]: I0203 06:42:13.901232 4872 util.go:48] "No ready sandbox 
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044071 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-extra-config-0\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044161 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-1\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044188 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-0\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044234 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-ssh-key-openstack-edpm-ipam\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044259 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-0\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044386 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-combined-ca-bundle\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044407 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-inventory\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044423 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmmfp\" (UniqueName: \"kubernetes.io/projected/74a3f126-057b-4f44-9483-82e6a6a00c90-kube-api-access-cmmfp\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.044474 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-1\") pod \"74a3f126-057b-4f44-9483-82e6a6a00c90\" (UID: \"74a3f126-057b-4f44-9483-82e6a6a00c90\") "
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.049715 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.056177 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a3f126-057b-4f44-9483-82e6a6a00c90-kube-api-access-cmmfp" (OuterVolumeSpecName: "kube-api-access-cmmfp") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "kube-api-access-cmmfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.075480 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.083996 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.085371 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.096634 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-inventory" (OuterVolumeSpecName: "inventory") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.098211 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.103588 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.106890 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "74a3f126-057b-4f44-9483-82e6a6a00c90" (UID: "74a3f126-057b-4f44-9483-82e6a6a00c90"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.146955 4872 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.147255 4872 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.147385 4872 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.147539 4872 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.147707 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.147861 4872 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.147973 4872 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.148083 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74a3f126-057b-4f44-9483-82e6a6a00c90-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.148191 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmmfp\" (UniqueName: \"kubernetes.io/projected/74a3f126-057b-4f44-9483-82e6a6a00c90-kube-api-access-cmmfp\") on node \"crc\" DevicePath \"\"" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.426275 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" event={"ID":"74a3f126-057b-4f44-9483-82e6a6a00c90","Type":"ContainerDied","Data":"b69181dbc0b3c68833ba6da90d627c4c973069cd3151c3adcbe57939e0e3b12f"} Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.426310 4872 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b69181dbc0b3c68833ba6da90d627c4c973069cd3151c3adcbe57939e0e3b12f" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.426331 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-m5jcw" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.618007 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl"] Feb 03 06:42:14 crc kubenswrapper[4872]: E0203 06:42:14.618829 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a3f126-057b-4f44-9483-82e6a6a00c90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.618946 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a3f126-057b-4f44-9483-82e6a6a00c90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 06:42:14 crc kubenswrapper[4872]: E0203 06:42:14.619043 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="registry-server" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.619137 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="registry-server" Feb 03 06:42:14 crc kubenswrapper[4872]: E0203 06:42:14.619242 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="extract-utilities" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.619311 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="extract-utilities" Feb 03 06:42:14 crc kubenswrapper[4872]: E0203 06:42:14.619401 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="extract-content" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.619494 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="extract-content" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.619830 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a3f126-057b-4f44-9483-82e6a6a00c90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.619936 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0be7748-42cb-4be0-94e9-afff78fa7750" containerName="registry-server" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.620771 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.623364 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.624240 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f2kkl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.624266 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.624419 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.625661 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.633108 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl"] Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.759799 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r599w\" (UniqueName: \"kubernetes.io/projected/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-kube-api-access-r599w\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.759862 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.759918 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.760062 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.760130 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc 
kubenswrapper[4872]: I0203 06:42:14.760168 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.760220 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.861911 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.862025 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.862070 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.862147 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.862841 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r599w\" (UniqueName: \"kubernetes.io/projected/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-kube-api-access-r599w\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.862888 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" 
(UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.862946 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.866317 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.866995 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.867406 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.868296 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.871141 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.871426 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.885088 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r599w\" (UniqueName: \"kubernetes.io/projected/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-kube-api-access-r599w\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:14 crc kubenswrapper[4872]: I0203 06:42:14.947544 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:42:15 crc kubenswrapper[4872]: I0203 06:42:15.577361 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:42:15 crc kubenswrapper[4872]: I0203 06:42:15.590967 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl"] Feb 03 06:42:16 crc kubenswrapper[4872]: I0203 06:42:16.445280 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" event={"ID":"a56903a8-61f2-433b-9ab7-2f96b9f8d15f","Type":"ContainerStarted","Data":"56c454d0c6cfca996559fd3140e21182d3f290999958678db42edfc90163782f"} Feb 03 06:42:16 crc kubenswrapper[4872]: I0203 06:42:16.445641 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" event={"ID":"a56903a8-61f2-433b-9ab7-2f96b9f8d15f","Type":"ContainerStarted","Data":"b2a56346c517cfe09550f647225fa122d5be7a79fc0588a5c9a14da41851cd46"} Feb 03 06:42:16 crc kubenswrapper[4872]: I0203 06:42:16.473479 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" podStartSLOduration=2.282240687 podStartE2EDuration="2.473460975s" podCreationTimestamp="2026-02-03 06:42:14 +0000 UTC" firstStartedPulling="2026-02-03 06:42:15.577103872 +0000 UTC m=+2506.159795286" lastFinishedPulling="2026-02-03 06:42:15.76832416 +0000 UTC m=+2506.351015574" observedRunningTime="2026-02-03 06:42:16.471377507 +0000 UTC m=+2507.054068921" watchObservedRunningTime="2026-02-03 06:42:16.473460975 +0000 UTC m=+2507.056152389" Feb 03 06:44:01 crc kubenswrapper[4872]: I0203 06:44:01.271236 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:44:01 crc kubenswrapper[4872]: I0203 06:44:01.273410 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:44:31 crc kubenswrapper[4872]: I0203 06:44:31.271501 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:44:31 crc kubenswrapper[4872]: I0203 06:44:31.272183 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 
03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.156798 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v"] Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.158452 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.161952 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.162249 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.227124 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v"] Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.308622 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9394df01-098e-42e4-8f6e-47e156fc07ad-secret-volume\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.309148 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9394df01-098e-42e4-8f6e-47e156fc07ad-config-volume\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.309199 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nj6w\" (UniqueName: \"kubernetes.io/projected/9394df01-098e-42e4-8f6e-47e156fc07ad-kube-api-access-4nj6w\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.410623 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9394df01-098e-42e4-8f6e-47e156fc07ad-secret-volume\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.410769 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9394df01-098e-42e4-8f6e-47e156fc07ad-config-volume\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.410826 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nj6w\" (UniqueName: \"kubernetes.io/projected/9394df01-098e-42e4-8f6e-47e156fc07ad-kube-api-access-4nj6w\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.411756 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9394df01-098e-42e4-8f6e-47e156fc07ad-config-volume\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.416946 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9394df01-098e-42e4-8f6e-47e156fc07ad-secret-volume\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.431002 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nj6w\" (UniqueName: \"kubernetes.io/projected/9394df01-098e-42e4-8f6e-47e156fc07ad-kube-api-access-4nj6w\") pod \"collect-profiles-29501685-8vq6v\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.485365 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:00 crc kubenswrapper[4872]: I0203 06:45:00.955838 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v"] Feb 03 06:45:00 crc kubenswrapper[4872]: W0203 06:45:00.973398 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9394df01_098e_42e4_8f6e_47e156fc07ad.slice/crio-9b3d0442b86756ddbc48f43f4528cb9c8ad33174ae9cecaab213c3bbc267cbe3 WatchSource:0}: Error finding container 9b3d0442b86756ddbc48f43f4528cb9c8ad33174ae9cecaab213c3bbc267cbe3: Status 404 returned error can't find the container with id 9b3d0442b86756ddbc48f43f4528cb9c8ad33174ae9cecaab213c3bbc267cbe3 Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.275718 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.276064 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.276114 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.277011 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83587a82c83550a53d3d891250f16f169bd528c1a4785c410ab0572d8afa3cf9"} 
pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.277070 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://83587a82c83550a53d3d891250f16f169bd528c1a4785c410ab0572d8afa3cf9" gracePeriod=600 Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.757816 4872 generic.go:334] "Generic (PLEG): container finished" podID="9394df01-098e-42e4-8f6e-47e156fc07ad" containerID="6c601bee0bfac32dd567c336058cbea37ce7cf969e3614383f367669283b74f4" exitCode=0 Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.757921 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" event={"ID":"9394df01-098e-42e4-8f6e-47e156fc07ad","Type":"ContainerDied","Data":"6c601bee0bfac32dd567c336058cbea37ce7cf969e3614383f367669283b74f4"} Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.758099 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" event={"ID":"9394df01-098e-42e4-8f6e-47e156fc07ad","Type":"ContainerStarted","Data":"9b3d0442b86756ddbc48f43f4528cb9c8ad33174ae9cecaab213c3bbc267cbe3"} Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.760798 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="83587a82c83550a53d3d891250f16f169bd528c1a4785c410ab0572d8afa3cf9" exitCode=0 Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.760847 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"83587a82c83550a53d3d891250f16f169bd528c1a4785c410ab0572d8afa3cf9"} Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.760884 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a"} Feb 03 06:45:01 crc kubenswrapper[4872]: I0203 06:45:01.760904 4872 scope.go:117] "RemoveContainer" containerID="b9b58247135e08e137cf2b201d147825a824785db0c823f7f264f022f7e68988" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.101407 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.283442 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nj6w\" (UniqueName: \"kubernetes.io/projected/9394df01-098e-42e4-8f6e-47e156fc07ad-kube-api-access-4nj6w\") pod \"9394df01-098e-42e4-8f6e-47e156fc07ad\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.283705 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9394df01-098e-42e4-8f6e-47e156fc07ad-secret-volume\") pod \"9394df01-098e-42e4-8f6e-47e156fc07ad\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.283768 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9394df01-098e-42e4-8f6e-47e156fc07ad-config-volume\") pod \"9394df01-098e-42e4-8f6e-47e156fc07ad\" (UID: \"9394df01-098e-42e4-8f6e-47e156fc07ad\") " Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.284562 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9394df01-098e-42e4-8f6e-47e156fc07ad-config-volume" (OuterVolumeSpecName: "config-volume") pod "9394df01-098e-42e4-8f6e-47e156fc07ad" (UID: "9394df01-098e-42e4-8f6e-47e156fc07ad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.290789 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9394df01-098e-42e4-8f6e-47e156fc07ad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9394df01-098e-42e4-8f6e-47e156fc07ad" (UID: "9394df01-098e-42e4-8f6e-47e156fc07ad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.291035 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9394df01-098e-42e4-8f6e-47e156fc07ad-kube-api-access-4nj6w" (OuterVolumeSpecName: "kube-api-access-4nj6w") pod "9394df01-098e-42e4-8f6e-47e156fc07ad" (UID: "9394df01-098e-42e4-8f6e-47e156fc07ad"). InnerVolumeSpecName "kube-api-access-4nj6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.386539 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9394df01-098e-42e4-8f6e-47e156fc07ad-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.386802 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9394df01-098e-42e4-8f6e-47e156fc07ad-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.386883 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nj6w\" (UniqueName: \"kubernetes.io/projected/9394df01-098e-42e4-8f6e-47e156fc07ad-kube-api-access-4nj6w\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.787467 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" event={"ID":"9394df01-098e-42e4-8f6e-47e156fc07ad","Type":"ContainerDied","Data":"9b3d0442b86756ddbc48f43f4528cb9c8ad33174ae9cecaab213c3bbc267cbe3"} Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.788111 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3d0442b86756ddbc48f43f4528cb9c8ad33174ae9cecaab213c3bbc267cbe3" Feb 03 06:45:03 crc kubenswrapper[4872]: I0203 06:45:03.787553 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v" Feb 03 06:45:04 crc kubenswrapper[4872]: I0203 06:45:04.198207 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt"] Feb 03 06:45:04 crc kubenswrapper[4872]: I0203 06:45:04.211311 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501640-r5tqt"] Feb 03 06:45:06 crc kubenswrapper[4872]: I0203 06:45:06.137082 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28287dc3-2b46-498f-9972-5a861374f4d5" path="/var/lib/kubelet/pods/28287dc3-2b46-498f-9972-5a861374f4d5/volumes" Feb 03 06:45:27 crc kubenswrapper[4872]: I0203 06:45:27.023022 4872 generic.go:334] "Generic (PLEG): container finished" podID="a56903a8-61f2-433b-9ab7-2f96b9f8d15f" containerID="56c454d0c6cfca996559fd3140e21182d3f290999958678db42edfc90163782f" exitCode=0 Feb 03 06:45:27 crc kubenswrapper[4872]: I0203 06:45:27.023505 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" event={"ID":"a56903a8-61f2-433b-9ab7-2f96b9f8d15f","Type":"ContainerDied","Data":"56c454d0c6cfca996559fd3140e21182d3f290999958678db42edfc90163782f"} Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.434247 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.588715 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-telemetry-combined-ca-bundle\") pod \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.588774 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-0\") pod \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.588849 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-1\") pod \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.589159 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ssh-key-openstack-edpm-ipam\") pod \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.589221 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r599w\" (UniqueName: \"kubernetes.io/projected/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-kube-api-access-r599w\") pod \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.589304 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-inventory\") pod \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.589382 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-2\") pod \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\" (UID: \"a56903a8-61f2-433b-9ab7-2f96b9f8d15f\") " Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.611765 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-kube-api-access-r599w" (OuterVolumeSpecName: "kube-api-access-r599w") pod "a56903a8-61f2-433b-9ab7-2f96b9f8d15f" (UID: "a56903a8-61f2-433b-9ab7-2f96b9f8d15f"). InnerVolumeSpecName "kube-api-access-r599w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.617780 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a56903a8-61f2-433b-9ab7-2f96b9f8d15f" (UID: "a56903a8-61f2-433b-9ab7-2f96b9f8d15f"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.619400 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a56903a8-61f2-433b-9ab7-2f96b9f8d15f" (UID: "a56903a8-61f2-433b-9ab7-2f96b9f8d15f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.624051 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-inventory" (OuterVolumeSpecName: "inventory") pod "a56903a8-61f2-433b-9ab7-2f96b9f8d15f" (UID: "a56903a8-61f2-433b-9ab7-2f96b9f8d15f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.625500 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a56903a8-61f2-433b-9ab7-2f96b9f8d15f" (UID: "a56903a8-61f2-433b-9ab7-2f96b9f8d15f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.643221 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a56903a8-61f2-433b-9ab7-2f96b9f8d15f" (UID: "a56903a8-61f2-433b-9ab7-2f96b9f8d15f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.645610 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a56903a8-61f2-433b-9ab7-2f96b9f8d15f" (UID: "a56903a8-61f2-433b-9ab7-2f96b9f8d15f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.691931 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.692128 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r599w\" (UniqueName: \"kubernetes.io/projected/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-kube-api-access-r599w\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.692144 4872 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.692156 4872 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.692170 4872 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.692184 4872 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:28 crc kubenswrapper[4872]: I0203 06:45:28.692196 4872 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a56903a8-61f2-433b-9ab7-2f96b9f8d15f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 03 06:45:29 crc kubenswrapper[4872]: I0203 06:45:29.038984 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" event={"ID":"a56903a8-61f2-433b-9ab7-2f96b9f8d15f","Type":"ContainerDied","Data":"b2a56346c517cfe09550f647225fa122d5be7a79fc0588a5c9a14da41851cd46"} Feb 03 06:45:29 crc kubenswrapper[4872]: I0203 06:45:29.039317 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2a56346c517cfe09550f647225fa122d5be7a79fc0588a5c9a14da41851cd46" Feb 03 06:45:29 crc kubenswrapper[4872]: I0203 06:45:29.039047 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl" Feb 03 06:45:58 crc kubenswrapper[4872]: I0203 06:45:58.171434 4872 scope.go:117] "RemoveContainer" containerID="1bbec60945d12e8a1813af936b5cc128adbb873ebb1a5d267ee0ee3730317ca3" Feb 03 06:45:58 crc kubenswrapper[4872]: I0203 06:45:58.208214 4872 scope.go:117] "RemoveContainer" containerID="4c63465b9759f99252eefa7aa3cf8b47dd0ea211173751d9a112c8299f41db3b" Feb 03 06:45:58 crc kubenswrapper[4872]: I0203 06:45:58.279637 4872 scope.go:117] "RemoveContainer" containerID="fd944d34b0d7ee911945b94fd84a9ea0831c4f651daa5e2c1d29975a4d16163f" Feb 03 06:45:58 crc kubenswrapper[4872]: I0203 06:45:58.315931 4872 scope.go:117] "RemoveContainer" containerID="71573c6f47cb19c1db4238c739d64dd889e196af1b0bb4c307ceb745f03021df" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.243837 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 06:46:22 crc kubenswrapper[4872]: E0203 06:46:22.245401 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56903a8-61f2-433b-9ab7-2f96b9f8d15f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.245430 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56903a8-61f2-433b-9ab7-2f96b9f8d15f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 06:46:22 crc kubenswrapper[4872]: E0203 06:46:22.245488 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9394df01-098e-42e4-8f6e-47e156fc07ad" containerName="collect-profiles" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.245503 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9394df01-098e-42e4-8f6e-47e156fc07ad" containerName="collect-profiles" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.245888 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9394df01-098e-42e4-8f6e-47e156fc07ad" containerName="collect-profiles" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.245930 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56903a8-61f2-433b-9ab7-2f96b9f8d15f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.246940 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.249556 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.250004 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-n66vq" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.251590 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.260586 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.263438 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.415381 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.415977 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-przqz\" (UniqueName: \"kubernetes.io/projected/ab488c2c-7a02-4e73-8aaa-5e0197d51631-kube-api-access-przqz\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.416125 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-config-data\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.416266 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.416371 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.416501 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.416638 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.416937 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.417042 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.518796 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.519264 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-przqz\" (UniqueName: \"kubernetes.io/projected/ab488c2c-7a02-4e73-8aaa-5e0197d51631-kube-api-access-przqz\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.519457 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-config-data\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.519652 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.519888 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.520665 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.521854 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.522131 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.522317 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.521430 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-config-data\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.521021 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.520568 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.523230 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.523385 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.527910 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.532067 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 
crc kubenswrapper[4872]: I0203 06:46:22.539808 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.542576 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-przqz\" (UniqueName: \"kubernetes.io/projected/ab488c2c-7a02-4e73-8aaa-5e0197d51631-kube-api-access-przqz\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.568741 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " pod="openstack/tempest-tests-tempest" Feb 03 06:46:22 crc kubenswrapper[4872]: I0203 06:46:22.592145 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 06:46:23 crc kubenswrapper[4872]: I0203 06:46:23.091121 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 06:46:23 crc kubenswrapper[4872]: I0203 06:46:23.935966 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ab488c2c-7a02-4e73-8aaa-5e0197d51631","Type":"ContainerStarted","Data":"e62e6e3cd2008fde1cc3856823bccdf4685b41d881aec65f5b04f9ce86c8f49b"} Feb 03 06:47:00 crc kubenswrapper[4872]: E0203 06:47:00.552963 4872 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 03 06:47:00 crc kubenswrapper[4872]: E0203 06:47:00.553853 4872 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-przqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(ab488c2c-7a02-4e73-8aaa-5e0197d51631): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 06:47:00 crc kubenswrapper[4872]: E0203 06:47:00.555041 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="ab488c2c-7a02-4e73-8aaa-5e0197d51631" Feb 03 06:47:01 crc kubenswrapper[4872]: I0203 06:47:01.271362 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:47:01 crc kubenswrapper[4872]: I0203 06:47:01.271448 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:47:01 crc kubenswrapper[4872]: E0203 06:47:01.337172 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="ab488c2c-7a02-4e73-8aaa-5e0197d51631" Feb 03 06:47:16 crc kubenswrapper[4872]: I0203 06:47:16.126179 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:47:16 crc kubenswrapper[4872]: I0203 06:47:16.600828 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 06:47:17 crc kubenswrapper[4872]: I0203 06:47:17.997263 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kbgxm"] Feb 03 06:47:17 crc kubenswrapper[4872]: I0203 06:47:17.999849 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.012979 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbgxm"] Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.037238 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdg4z\" (UniqueName: \"kubernetes.io/projected/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-kube-api-access-zdg4z\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.037285 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-utilities\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.037335 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-catalog-content\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.139347 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdg4z\" (UniqueName: \"kubernetes.io/projected/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-kube-api-access-zdg4z\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.139390 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-utilities\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.139427 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-catalog-content\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.139971 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-utilities\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.139980 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-catalog-content\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.157871 4872 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zdg4z\" (UniqueName: \"kubernetes.io/projected/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-kube-api-access-zdg4z\") pod \"redhat-marketplace-kbgxm\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.330753 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.525405 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ab488c2c-7a02-4e73-8aaa-5e0197d51631","Type":"ContainerStarted","Data":"510d481f2cfb71247d9a9be8adccbf186d5592926ea0fe9a7840b8cf27a7805c"} Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.566437 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.08235237 podStartE2EDuration="57.566421749s" podCreationTimestamp="2026-02-03 06:46:21 +0000 UTC" firstStartedPulling="2026-02-03 06:46:23.111673461 +0000 UTC m=+2753.694364885" lastFinishedPulling="2026-02-03 06:47:16.59574285 +0000 UTC m=+2807.178434264" observedRunningTime="2026-02-03 06:47:18.555033475 +0000 UTC m=+2809.137724889" watchObservedRunningTime="2026-02-03 06:47:18.566421749 +0000 UTC m=+2809.149113153" Feb 03 06:47:18 crc kubenswrapper[4872]: W0203 06:47:18.808016 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c5cf0a6_0b7a_4a6d_88ac_debf45311c98.slice/crio-66d3611b4c84a0d5b03cbd893ed769b79f7da44a6df1430dc21389de931dc166 WatchSource:0}: Error finding container 66d3611b4c84a0d5b03cbd893ed769b79f7da44a6df1430dc21389de931dc166: Status 404 returned error can't find the container with id 66d3611b4c84a0d5b03cbd893ed769b79f7da44a6df1430dc21389de931dc166 Feb 03 06:47:18 crc kubenswrapper[4872]: I0203 06:47:18.809402 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbgxm"] Feb 03 06:47:19 crc kubenswrapper[4872]: I0203 06:47:19.540253 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbgxm" event={"ID":"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98","Type":"ContainerStarted","Data":"66d3611b4c84a0d5b03cbd893ed769b79f7da44a6df1430dc21389de931dc166"} Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.407235 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2rw8m"] Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.410074 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.426005 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rw8m"] Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.484968 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psql4\" (UniqueName: \"kubernetes.io/projected/9bb35678-9696-4128-a9f8-8496102c6032-kube-api-access-psql4\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.485061 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-catalog-content\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.485212 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-utilities\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.551151 4872 generic.go:334] "Generic (PLEG): container finished" podID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerID="eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c" exitCode=0 Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.551256 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbgxm" event={"ID":"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98","Type":"ContainerDied","Data":"eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c"} Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.587137 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psql4\" (UniqueName: \"kubernetes.io/projected/9bb35678-9696-4128-a9f8-8496102c6032-kube-api-access-psql4\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.587222 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-catalog-content\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.587343 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-utilities\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.587902 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-utilities\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " 
pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.588184 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-catalog-content\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.614916 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psql4\" (UniqueName: \"kubernetes.io/projected/9bb35678-9696-4128-a9f8-8496102c6032-kube-api-access-psql4\") pod \"redhat-operators-2rw8m\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:20 crc kubenswrapper[4872]: I0203 06:47:20.741906 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:21 crc kubenswrapper[4872]: I0203 06:47:21.259820 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rw8m"] Feb 03 06:47:21 crc kubenswrapper[4872]: I0203 06:47:21.569587 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbgxm" event={"ID":"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98","Type":"ContainerStarted","Data":"3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8"} Feb 03 06:47:21 crc kubenswrapper[4872]: I0203 06:47:21.576840 4872 generic.go:334] "Generic (PLEG): container finished" podID="9bb35678-9696-4128-a9f8-8496102c6032" containerID="ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96" exitCode=0 Feb 03 06:47:21 crc kubenswrapper[4872]: I0203 06:47:21.576890 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rw8m" event={"ID":"9bb35678-9696-4128-a9f8-8496102c6032","Type":"ContainerDied","Data":"ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96"} Feb 03 06:47:21 crc kubenswrapper[4872]: I0203 06:47:21.576922 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rw8m" event={"ID":"9bb35678-9696-4128-a9f8-8496102c6032","Type":"ContainerStarted","Data":"dad947b7ee7ef12e0cd88cc90c39e0ceae851d098b700185a239202240f94ef7"} Feb 03 06:47:22 crc kubenswrapper[4872]: I0203 06:47:22.592763 4872 generic.go:334] "Generic (PLEG): container finished" podID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerID="3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8" exitCode=0 Feb 03 06:47:22 crc kubenswrapper[4872]: I0203 06:47:22.593061 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbgxm" event={"ID":"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98","Type":"ContainerDied","Data":"3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8"} Feb 03 06:47:22 crc kubenswrapper[4872]: I0203 06:47:22.611707 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rw8m" event={"ID":"9bb35678-9696-4128-a9f8-8496102c6032","Type":"ContainerStarted","Data":"66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870"} Feb 03 06:47:24 crc kubenswrapper[4872]: I0203 06:47:24.632580 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbgxm" 
event={"ID":"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98","Type":"ContainerStarted","Data":"a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1"} Feb 03 06:47:24 crc kubenswrapper[4872]: I0203 06:47:24.656570 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kbgxm" podStartSLOduration=4.7881777119999995 podStartE2EDuration="7.656545441s" podCreationTimestamp="2026-02-03 06:47:17 +0000 UTC" firstStartedPulling="2026-02-03 06:47:20.553106715 +0000 UTC m=+2811.135798129" lastFinishedPulling="2026-02-03 06:47:23.421474444 +0000 UTC m=+2814.004165858" observedRunningTime="2026-02-03 06:47:24.647878682 +0000 UTC m=+2815.230570096" watchObservedRunningTime="2026-02-03 06:47:24.656545441 +0000 UTC m=+2815.239236875" Feb 03 06:47:27 crc kubenswrapper[4872]: I0203 06:47:27.668365 4872 generic.go:334] "Generic (PLEG): container finished" podID="9bb35678-9696-4128-a9f8-8496102c6032" containerID="66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870" exitCode=0 Feb 03 06:47:27 crc kubenswrapper[4872]: I0203 06:47:27.668444 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rw8m" event={"ID":"9bb35678-9696-4128-a9f8-8496102c6032","Type":"ContainerDied","Data":"66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870"} Feb 03 06:47:28 crc kubenswrapper[4872]: I0203 06:47:28.335643 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:28 crc kubenswrapper[4872]: I0203 06:47:28.335939 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:28 crc kubenswrapper[4872]: I0203 06:47:28.394700 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:28 crc kubenswrapper[4872]: I0203 06:47:28.679544 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rw8m" event={"ID":"9bb35678-9696-4128-a9f8-8496102c6032","Type":"ContainerStarted","Data":"c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b"} Feb 03 06:47:28 crc kubenswrapper[4872]: I0203 06:47:28.711671 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2rw8m" podStartSLOduration=2.196183923 podStartE2EDuration="8.711652311s" podCreationTimestamp="2026-02-03 06:47:20 +0000 UTC" firstStartedPulling="2026-02-03 06:47:21.581911773 +0000 UTC m=+2812.164603177" lastFinishedPulling="2026-02-03 06:47:28.097380151 +0000 UTC m=+2818.680071565" observedRunningTime="2026-02-03 06:47:28.699399886 +0000 UTC m=+2819.282091300" watchObservedRunningTime="2026-02-03 06:47:28.711652311 +0000 UTC m=+2819.294343725" Feb 03 06:47:28 crc kubenswrapper[4872]: I0203 06:47:28.722264 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:29 crc kubenswrapper[4872]: I0203 06:47:29.984730 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbgxm"] Feb 03 06:47:30 crc kubenswrapper[4872]: I0203 06:47:30.697467 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kbgxm" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerName="registry-server" 
containerID="cri-o://a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1" gracePeriod=2 Feb 03 06:47:30 crc kubenswrapper[4872]: I0203 06:47:30.743611 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:30 crc kubenswrapper[4872]: I0203 06:47:30.745285 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.169491 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.201616 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdg4z\" (UniqueName: \"kubernetes.io/projected/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-kube-api-access-zdg4z\") pod \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.201788 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-catalog-content\") pod \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.201840 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-utilities\") pod \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\" (UID: \"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98\") " Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.202886 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-utilities" (OuterVolumeSpecName: "utilities") pod "2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" (UID: "2c5cf0a6-0b7a-4a6d-88ac-debf45311c98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.214613 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-kube-api-access-zdg4z" (OuterVolumeSpecName: "kube-api-access-zdg4z") pod "2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" (UID: "2c5cf0a6-0b7a-4a6d-88ac-debf45311c98"). InnerVolumeSpecName "kube-api-access-zdg4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.226532 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" (UID: "2c5cf0a6-0b7a-4a6d-88ac-debf45311c98"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.271508 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.271571 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.304315 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.304348 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdg4z\" (UniqueName: \"kubernetes.io/projected/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-kube-api-access-zdg4z\") on node \"crc\" DevicePath \"\"" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.304361 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.709250 4872 generic.go:334] "Generic (PLEG): container finished" podID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerID="a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1" exitCode=0 Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.709298 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbgxm" event={"ID":"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98","Type":"ContainerDied","Data":"a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1"} Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.709325 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbgxm" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.709360 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbgxm" event={"ID":"2c5cf0a6-0b7a-4a6d-88ac-debf45311c98","Type":"ContainerDied","Data":"66d3611b4c84a0d5b03cbd893ed769b79f7da44a6df1430dc21389de931dc166"} Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.709376 4872 scope.go:117] "RemoveContainer" containerID="a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.736883 4872 scope.go:117] "RemoveContainer" containerID="3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.769320 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbgxm"] Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.772830 4872 scope.go:117] "RemoveContainer" containerID="eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.776207 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbgxm"] Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.802226 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2rw8m" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="registry-server" probeResult="failure" output=< Feb 03 06:47:31 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:47:31 crc kubenswrapper[4872]: > Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.838707 4872 scope.go:117] "RemoveContainer" containerID="a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1" Feb 03 06:47:31 crc kubenswrapper[4872]: E0203 06:47:31.849411 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1\": container with ID starting with a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1 not found: ID does not exist" containerID="a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.851214 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1"} err="failed to get container status \"a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1\": rpc error: code = NotFound desc = could not find container \"a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1\": container with ID starting with a880ef5b0ef7223c291aacc6423ca5c142e68feff17a77815eec9079982625c1 not found: ID does not exist" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.851249 4872 scope.go:117] "RemoveContainer" containerID="3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8" Feb 03 06:47:31 crc kubenswrapper[4872]: E0203 06:47:31.853896 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8\": container with ID starting with 3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8 not found: ID does not exist" 
containerID="3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.853931 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8"} err="failed to get container status \"3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8\": rpc error: code = NotFound desc = could not find container \"3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8\": container with ID starting with 3a880c0326f143accfa86fcffcbe4c99dd5d8e08269ee46667a68e8a943b26a8 not found: ID does not exist" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.853946 4872 scope.go:117] "RemoveContainer" containerID="eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c" Feb 03 06:47:31 crc kubenswrapper[4872]: E0203 06:47:31.854250 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c\": container with ID starting with eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c not found: ID does not exist" containerID="eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c" Feb 03 06:47:31 crc kubenswrapper[4872]: I0203 06:47:31.854275 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c"} err="failed to get container status \"eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c\": rpc error: code = NotFound desc = could not find container \"eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c\": container with ID starting with eda18948574c0ac1ae039d7bf338b3cd118345ce8b135ea0d13f1a60713c9e7c not found: ID does not exist" Feb 03 06:47:32 crc kubenswrapper[4872]: I0203 06:47:32.134762 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" path="/var/lib/kubelet/pods/2c5cf0a6-0b7a-4a6d-88ac-debf45311c98/volumes" Feb 03 06:47:40 crc kubenswrapper[4872]: I0203 06:47:40.809558 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:40 crc kubenswrapper[4872]: I0203 06:47:40.907854 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:41 crc kubenswrapper[4872]: I0203 06:47:41.066792 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rw8m"] Feb 03 06:47:42 crc kubenswrapper[4872]: I0203 06:47:42.815385 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2rw8m" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="registry-server" containerID="cri-o://c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b" gracePeriod=2 Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.334541 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.350192 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psql4\" (UniqueName: \"kubernetes.io/projected/9bb35678-9696-4128-a9f8-8496102c6032-kube-api-access-psql4\") pod \"9bb35678-9696-4128-a9f8-8496102c6032\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.350276 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-catalog-content\") pod \"9bb35678-9696-4128-a9f8-8496102c6032\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.350571 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-utilities\") pod \"9bb35678-9696-4128-a9f8-8496102c6032\" (UID: \"9bb35678-9696-4128-a9f8-8496102c6032\") " Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.351915 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-utilities" (OuterVolumeSpecName: "utilities") pod "9bb35678-9696-4128-a9f8-8496102c6032" (UID: "9bb35678-9696-4128-a9f8-8496102c6032"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.353135 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.369882 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb35678-9696-4128-a9f8-8496102c6032-kube-api-access-psql4" (OuterVolumeSpecName: "kube-api-access-psql4") pod "9bb35678-9696-4128-a9f8-8496102c6032" (UID: "9bb35678-9696-4128-a9f8-8496102c6032"). InnerVolumeSpecName "kube-api-access-psql4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.454668 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psql4\" (UniqueName: \"kubernetes.io/projected/9bb35678-9696-4128-a9f8-8496102c6032-kube-api-access-psql4\") on node \"crc\" DevicePath \"\"" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.498306 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bb35678-9696-4128-a9f8-8496102c6032" (UID: "9bb35678-9696-4128-a9f8-8496102c6032"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.557722 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bb35678-9696-4128-a9f8-8496102c6032-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.823951 4872 generic.go:334] "Generic (PLEG): container finished" podID="9bb35678-9696-4128-a9f8-8496102c6032" containerID="c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b" exitCode=0 Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.824018 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rw8m" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.824063 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rw8m" event={"ID":"9bb35678-9696-4128-a9f8-8496102c6032","Type":"ContainerDied","Data":"c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b"} Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.825188 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rw8m" event={"ID":"9bb35678-9696-4128-a9f8-8496102c6032","Type":"ContainerDied","Data":"dad947b7ee7ef12e0cd88cc90c39e0ceae851d098b700185a239202240f94ef7"} Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.825226 4872 scope.go:117] "RemoveContainer" containerID="c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.846886 4872 scope.go:117] "RemoveContainer" containerID="66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.877219 4872 scope.go:117] "RemoveContainer" containerID="ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.932760 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rw8m"] Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.942480 4872 scope.go:117] "RemoveContainer" containerID="c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b" Feb 03 06:47:43 crc kubenswrapper[4872]: E0203 06:47:43.944038 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b\": container with ID starting with c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b not found: ID does not exist" containerID="c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.944094 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b"} err="failed to get container status \"c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b\": rpc error: code = NotFound desc = could not find container \"c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b\": container with ID starting with c65167eff8a04eb53d7df8215e707e24fe7a99bacb45d5012f38ad3b54da833b not found: ID does not exist" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.944124 4872 scope.go:117] "RemoveContainer" containerID="66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870" Feb 03 06:47:43 
crc kubenswrapper[4872]: E0203 06:47:43.944629 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870\": container with ID starting with 66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870 not found: ID does not exist" containerID="66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.944659 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870"} err="failed to get container status \"66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870\": rpc error: code = NotFound desc = could not find container \"66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870\": container with ID starting with 66f4e29842bb774a7a0d4f5cf8f8dc291ec88c54fdf411089b68d29921d43870 not found: ID does not exist" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.944678 4872 scope.go:117] "RemoveContainer" containerID="ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96" Feb 03 06:47:43 crc kubenswrapper[4872]: E0203 06:47:43.945308 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96\": container with ID starting with ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96 not found: ID does not exist" containerID="ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.945366 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96"} err="failed to get container status \"ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96\": rpc error: code = NotFound desc = could not find container \"ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96\": container with ID starting with ca185abe9afc9a37d35359f05bedd8028b170ab5219c3446455fff75f5072e96 not found: ID does not exist" Feb 03 06:47:43 crc kubenswrapper[4872]: I0203 06:47:43.961194 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2rw8m"] Feb 03 06:47:44 crc kubenswrapper[4872]: I0203 06:47:44.136935 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb35678-9696-4128-a9f8-8496102c6032" path="/var/lib/kubelet/pods/9bb35678-9696-4128-a9f8-8496102c6032/volumes" Feb 03 06:48:01 crc kubenswrapper[4872]: I0203 06:48:01.271203 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:48:01 crc kubenswrapper[4872]: I0203 06:48:01.271785 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:48:01 crc kubenswrapper[4872]: I0203 06:48:01.271828 4872 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:48:01 crc kubenswrapper[4872]: I0203 06:48:01.272491 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:48:01 crc kubenswrapper[4872]: I0203 06:48:01.272534 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" gracePeriod=600 Feb 03 06:48:01 crc kubenswrapper[4872]: E0203 06:48:01.401678 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:48:02 crc kubenswrapper[4872]: I0203 06:48:02.003709 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" exitCode=0 Feb 03 06:48:02 crc kubenswrapper[4872]: I0203 06:48:02.003779 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a"} Feb 03 06:48:02 crc kubenswrapper[4872]: I0203 06:48:02.004067 4872 scope.go:117] "RemoveContainer" containerID="83587a82c83550a53d3d891250f16f169bd528c1a4785c410ab0572d8afa3cf9" Feb 03 06:48:02 crc kubenswrapper[4872]: I0203 06:48:02.004962 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:48:02 crc kubenswrapper[4872]: E0203 06:48:02.005402 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:48:14 crc kubenswrapper[4872]: I0203 06:48:14.124660 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:48:14 crc kubenswrapper[4872]: E0203 06:48:14.125605 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:48:26 
crc kubenswrapper[4872]: I0203 06:48:26.123396 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:48:26 crc kubenswrapper[4872]: E0203 06:48:26.124534 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:48:41 crc kubenswrapper[4872]: I0203 06:48:41.123538 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:48:41 crc kubenswrapper[4872]: E0203 06:48:41.124117 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:48:56 crc kubenswrapper[4872]: I0203 06:48:56.128874 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:48:56 crc kubenswrapper[4872]: E0203 06:48:56.129954 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.472264 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xtr7g"] Feb 03 06:49:07 crc kubenswrapper[4872]: E0203 06:49:07.473104 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerName="extract-utilities" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473116 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerName="extract-utilities" Feb 03 06:49:07 crc kubenswrapper[4872]: E0203 06:49:07.473126 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="registry-server" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473132 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="registry-server" Feb 03 06:49:07 crc kubenswrapper[4872]: E0203 06:49:07.473142 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="extract-content" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473148 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="extract-content" Feb 03 06:49:07 crc kubenswrapper[4872]: E0203 06:49:07.473160 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" 
containerName="extract-content" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473166 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerName="extract-content" Feb 03 06:49:07 crc kubenswrapper[4872]: E0203 06:49:07.473191 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="extract-utilities" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473198 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="extract-utilities" Feb 03 06:49:07 crc kubenswrapper[4872]: E0203 06:49:07.473213 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerName="registry-server" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473219 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerName="registry-server" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473364 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5cf0a6-0b7a-4a6d-88ac-debf45311c98" containerName="registry-server" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.473576 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb35678-9696-4128-a9f8-8496102c6032" containerName="registry-server" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.474884 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.493355 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtr7g"] Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.544104 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-catalog-content\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.544167 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfj9x\" (UniqueName: \"kubernetes.io/projected/a4121ae7-6ba5-4841-ae27-29229ea0517b-kube-api-access-mfj9x\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.544449 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-utilities\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.647046 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-catalog-content\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.647099 4872 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mfj9x\" (UniqueName: \"kubernetes.io/projected/a4121ae7-6ba5-4841-ae27-29229ea0517b-kube-api-access-mfj9x\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.647186 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-utilities\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.647617 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-catalog-content\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.647631 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-utilities\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.671537 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfj9x\" (UniqueName: \"kubernetes.io/projected/a4121ae7-6ba5-4841-ae27-29229ea0517b-kube-api-access-mfj9x\") pod \"certified-operators-xtr7g\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:07 crc kubenswrapper[4872]: I0203 06:49:07.799391 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:08 crc kubenswrapper[4872]: I0203 06:49:08.128441 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:49:08 crc kubenswrapper[4872]: E0203 06:49:08.128943 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:49:08 crc kubenswrapper[4872]: I0203 06:49:08.533278 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xtr7g"] Feb 03 06:49:08 crc kubenswrapper[4872]: I0203 06:49:08.610101 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtr7g" event={"ID":"a4121ae7-6ba5-4841-ae27-29229ea0517b","Type":"ContainerStarted","Data":"5f26aec5fc93c4ff07cc9ebd2e8adc3c5cb5478945d553adb1aef8cdaf245572"} Feb 03 06:49:09 crc kubenswrapper[4872]: I0203 06:49:09.619853 4872 generic.go:334] "Generic (PLEG): container finished" podID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerID="a4fcf8defd02f6d7e07f7b66c7afbed06b34725116f1ee266c6623106217938c" exitCode=0 Feb 03 06:49:09 crc kubenswrapper[4872]: I0203 06:49:09.619904 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtr7g" event={"ID":"a4121ae7-6ba5-4841-ae27-29229ea0517b","Type":"ContainerDied","Data":"a4fcf8defd02f6d7e07f7b66c7afbed06b34725116f1ee266c6623106217938c"} Feb 03 06:49:10 crc kubenswrapper[4872]: I0203 06:49:10.630115 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtr7g" event={"ID":"a4121ae7-6ba5-4841-ae27-29229ea0517b","Type":"ContainerStarted","Data":"9f98c99d6bdc0e493bc4574b3e2caf08787f4598eb002cd5ff550c3735206b0d"} Feb 03 06:49:12 crc kubenswrapper[4872]: I0203 06:49:12.648473 4872 generic.go:334] "Generic (PLEG): container finished" podID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerID="9f98c99d6bdc0e493bc4574b3e2caf08787f4598eb002cd5ff550c3735206b0d" exitCode=0 Feb 03 06:49:12 crc kubenswrapper[4872]: I0203 06:49:12.648559 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtr7g" event={"ID":"a4121ae7-6ba5-4841-ae27-29229ea0517b","Type":"ContainerDied","Data":"9f98c99d6bdc0e493bc4574b3e2caf08787f4598eb002cd5ff550c3735206b0d"} Feb 03 06:49:13 crc kubenswrapper[4872]: I0203 06:49:13.660387 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtr7g" event={"ID":"a4121ae7-6ba5-4841-ae27-29229ea0517b","Type":"ContainerStarted","Data":"af08118d5d883168d9e0d675dce0636978692f9a8dd348ab1bc90a2a0c975647"} Feb 03 06:49:13 crc kubenswrapper[4872]: I0203 06:49:13.689631 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xtr7g" podStartSLOduration=3.230881218 podStartE2EDuration="6.689588227s" podCreationTimestamp="2026-02-03 06:49:07 +0000 UTC" firstStartedPulling="2026-02-03 06:49:09.621361014 +0000 UTC m=+2920.204052428" lastFinishedPulling="2026-02-03 06:49:13.080068013 +0000 UTC m=+2923.662759437" observedRunningTime="2026-02-03 
Feb 03 06:49:13 crc kubenswrapper[4872]: I0203 06:49:13.689631 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xtr7g" podStartSLOduration=3.230881218 podStartE2EDuration="6.689588227s" podCreationTimestamp="2026-02-03 06:49:07 +0000 UTC" firstStartedPulling="2026-02-03 06:49:09.621361014 +0000 UTC m=+2920.204052428" lastFinishedPulling="2026-02-03 06:49:13.080068013 +0000 UTC m=+2923.662759437" observedRunningTime="2026-02-03 06:49:13.687312452 +0000 UTC m=+2924.270003896" watchObservedRunningTime="2026-02-03 06:49:13.689588227 +0000 UTC m=+2924.272279661" Feb 03 06:49:17 crc kubenswrapper[4872]: I0203 06:49:17.799554 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:17 crc kubenswrapper[4872]: I0203 06:49:17.800029 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:18 crc kubenswrapper[4872]: I0203 06:49:18.852771 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xtr7g" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="registry-server" probeResult="failure" output=< Feb 03 06:49:18 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:49:18 crc kubenswrapper[4872]: > Feb 03 06:49:20 crc kubenswrapper[4872]: I0203 06:49:20.129228 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:49:20 crc kubenswrapper[4872]: E0203 06:49:20.129871 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:49:23 crc kubenswrapper[4872]: I0203 06:49:23.515816 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" podUID="cd3e162d-6733-47c4-b507-c08c577723d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:49:23 crc kubenswrapper[4872]: I0203 06:49:23.515832 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-9qph7" podUID="cd3e162d-6733-47c4-b507-c08c577723d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 06:49:27 crc kubenswrapper[4872]: I0203 06:49:27.856741 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:27 crc kubenswrapper[4872]: I0203 06:49:27.919710 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:28 crc kubenswrapper[4872]: I0203 06:49:28.099304 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xtr7g"] Feb 03 06:49:29 crc kubenswrapper[4872]: I0203 06:49:29.778610 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xtr7g" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="registry-server" containerID="cri-o://af08118d5d883168d9e0d675dce0636978692f9a8dd348ab1bc90a2a0c975647" gracePeriod=2
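Note: in the "Observed pod startup duration" record above, podStartSLOduration is the end-to-end startup time minus the image-pull window; the subtraction checks out exactly against the monotonic (m=+...) readings rather than the wall-clock strings. A quick verification in Go, using only values from that record:

    package main

    import "fmt"

    func main() {
        const (
            e2e              = 6.689588227    // podStartE2EDuration, seconds
            firstStartedPull = 2920.204052428 // m=+ at firstStartedPulling
            lastFinishedPull = 2923.662759437 // m=+ at lastFinishedPulling
        )
        // 6.689588227 - (2923.662759437 - 2920.204052428) = 3.230881218,
        // matching podStartSLOduration above.
        fmt.Printf("podStartSLOduration = %.9f\n", e2e-(lastFinishedPull-firstStartedPull))
    }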
Feb 03 06:49:30 crc kubenswrapper[4872]: I0203 06:49:30.793121 4872 generic.go:334] "Generic (PLEG): container finished" podID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerID="af08118d5d883168d9e0d675dce0636978692f9a8dd348ab1bc90a2a0c975647" exitCode=0 Feb 03 06:49:30 crc kubenswrapper[4872]: I0203 06:49:30.793164 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtr7g" event={"ID":"a4121ae7-6ba5-4841-ae27-29229ea0517b","Type":"ContainerDied","Data":"af08118d5d883168d9e0d675dce0636978692f9a8dd348ab1bc90a2a0c975647"} Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.123912 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:49:31 crc kubenswrapper[4872]: E0203 06:49:31.124485 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.383594 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.458675 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-utilities\") pod \"a4121ae7-6ba5-4841-ae27-29229ea0517b\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.459523 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-utilities" (OuterVolumeSpecName: "utilities") pod "a4121ae7-6ba5-4841-ae27-29229ea0517b" (UID: "a4121ae7-6ba5-4841-ae27-29229ea0517b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.460146 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfj9x\" (UniqueName: \"kubernetes.io/projected/a4121ae7-6ba5-4841-ae27-29229ea0517b-kube-api-access-mfj9x\") pod \"a4121ae7-6ba5-4841-ae27-29229ea0517b\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.460345 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-catalog-content\") pod \"a4121ae7-6ba5-4841-ae27-29229ea0517b\" (UID: \"a4121ae7-6ba5-4841-ae27-29229ea0517b\") " Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.461471 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.493118 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4121ae7-6ba5-4841-ae27-29229ea0517b-kube-api-access-mfj9x" (OuterVolumeSpecName: "kube-api-access-mfj9x") pod "a4121ae7-6ba5-4841-ae27-29229ea0517b" (UID: "a4121ae7-6ba5-4841-ae27-29229ea0517b"). InnerVolumeSpecName "kube-api-access-mfj9x".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.529829 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4121ae7-6ba5-4841-ae27-29229ea0517b" (UID: "a4121ae7-6ba5-4841-ae27-29229ea0517b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.563607 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfj9x\" (UniqueName: \"kubernetes.io/projected/a4121ae7-6ba5-4841-ae27-29229ea0517b-kube-api-access-mfj9x\") on node \"crc\" DevicePath \"\"" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.563646 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4121ae7-6ba5-4841-ae27-29229ea0517b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.806092 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xtr7g" event={"ID":"a4121ae7-6ba5-4841-ae27-29229ea0517b","Type":"ContainerDied","Data":"5f26aec5fc93c4ff07cc9ebd2e8adc3c5cb5478945d553adb1aef8cdaf245572"} Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.806149 4872 scope.go:117] "RemoveContainer" containerID="af08118d5d883168d9e0d675dce0636978692f9a8dd348ab1bc90a2a0c975647" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.806200 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xtr7g" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.842010 4872 scope.go:117] "RemoveContainer" containerID="9f98c99d6bdc0e493bc4574b3e2caf08787f4598eb002cd5ff550c3735206b0d" Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.863504 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xtr7g"] Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.871208 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xtr7g"] Feb 03 06:49:31 crc kubenswrapper[4872]: I0203 06:49:31.883886 4872 scope.go:117] "RemoveContainer" containerID="a4fcf8defd02f6d7e07f7b66c7afbed06b34725116f1ee266c6623106217938c" Feb 03 06:49:32 crc kubenswrapper[4872]: I0203 06:49:32.133464 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" path="/var/lib/kubelet/pods/a4121ae7-6ba5-4841-ae27-29229ea0517b/volumes" Feb 03 06:49:46 crc kubenswrapper[4872]: I0203 06:49:46.123063 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:49:46 crc kubenswrapper[4872]: E0203 06:49:46.123789 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:50:00 crc kubenswrapper[4872]: I0203 06:50:00.130292 4872 scope.go:117] "RemoveContainer" 
containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:50:00 crc kubenswrapper[4872]: E0203 06:50:00.131068 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:50:14 crc kubenswrapper[4872]: I0203 06:50:14.122570 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:50:14 crc kubenswrapper[4872]: E0203 06:50:14.123362 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:50:29 crc kubenswrapper[4872]: I0203 06:50:29.123237 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:50:29 crc kubenswrapper[4872]: E0203 06:50:29.123959 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:50:42 crc kubenswrapper[4872]: I0203 06:50:42.123217 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:50:42 crc kubenswrapper[4872]: E0203 06:50:42.124194 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:50:54 crc kubenswrapper[4872]: I0203 06:50:54.122428 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:50:54 crc kubenswrapper[4872]: E0203 06:50:54.124094 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:51:08 crc kubenswrapper[4872]: I0203 06:51:08.122943 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:51:08 crc kubenswrapper[4872]: E0203 06:51:08.123607 4872 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:51:21 crc kubenswrapper[4872]: I0203 06:51:21.123329 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:51:21 crc kubenswrapper[4872]: E0203 06:51:21.124345 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:51:32 crc kubenswrapper[4872]: I0203 06:51:32.123697 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:51:32 crc kubenswrapper[4872]: E0203 06:51:32.124326 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:51:43 crc kubenswrapper[4872]: I0203 06:51:43.122535 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:51:43 crc kubenswrapper[4872]: E0203 06:51:43.124421 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:51:55 crc kubenswrapper[4872]: I0203 06:51:55.122549 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:51:55 crc kubenswrapper[4872]: E0203 06:51:55.123586 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:52:06 crc kubenswrapper[4872]: I0203 06:52:06.123273 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:52:06 crc kubenswrapper[4872]: E0203 06:52:06.123952 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:52:20 crc kubenswrapper[4872]: I0203 06:52:20.141711 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:52:20 crc kubenswrapper[4872]: E0203 06:52:20.142568 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:52:35 crc kubenswrapper[4872]: I0203 06:52:35.122545 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:52:35 crc kubenswrapper[4872]: E0203 06:52:35.123389 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:52:47 crc kubenswrapper[4872]: I0203 06:52:47.123163 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:52:47 crc kubenswrapper[4872]: E0203 06:52:47.124446 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:53:00 crc kubenswrapper[4872]: I0203 06:53:00.134316 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:53:00 crc kubenswrapper[4872]: E0203 06:53:00.143935 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:53:13 crc kubenswrapper[4872]: I0203 06:53:13.123333 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:53:13 crc kubenswrapper[4872]: I0203 06:53:13.773300 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"d9d4a724111c20653851d457af237fb8bdb8d8f30fc74b1ed0c382d96168d44b"}
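Note: the RemoveContainer/CrashLoopBackOff pairs repeating above are the sync loop re-evaluating the pod while the restart back-off holds; the "back-off 5m0s" run ends at 06:53:13, when the kubelet restarts the container (ContainerStarted d9d4a724...). A sketch of the doubling back-off that produces that 5m ceiling (the 10s base and 5m cap are assumed kubelet defaults; only the 5m0s figure actually appears in the log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second   // assumed base back-off
        maxDelay := 5 * time.Minute // the "back-off 5m0s" ceiling seen above
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: back-off %s\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // every later restart waits the full 5m0s
            }
        }
    }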
Feb 03 06:55:31 crc kubenswrapper[4872]: I0203 06:55:31.271787 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:55:31 crc kubenswrapper[4872]: I0203 06:55:31.272237 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:56:01 crc kubenswrapper[4872]: I0203 06:56:01.271300 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:56:01 crc kubenswrapper[4872]: I0203 06:56:01.271935 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.271345 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.272122 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.273756 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.274772 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9d4a724111c20653851d457af237fb8bdb8d8f30fc74b1ed0c382d96168d44b"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.274861 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://d9d4a724111c20653851d457af237fb8bdb8d8f30fc74b1ed0c382d96168d44b" gracePeriod=600 Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.995327 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="d9d4a724111c20653851d457af237fb8bdb8d8f30fc74b1ed0c382d96168d44b" exitCode=0
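Note: three consecutive liveness failures at the 30s probe interval (06:55:31, 06:56:01, 06:56:31) exhaust the failure threshold, so the kubelet kills machine-config-daemon with its 600s grace period and restarts it. The probe itself is a plain HTTP GET against the endpoint shown in the log; a minimal stand-in (client details and the 1s timeout are illustrative, not the kubelet's prober code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused" as above
            fmt.Println("probe failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe result:", resp.Status) // any 2xx/3xx status passes
    }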
Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.995398 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"d9d4a724111c20653851d457af237fb8bdb8d8f30fc74b1ed0c382d96168d44b"} Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.995790 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede"} Feb 03 06:56:31 crc kubenswrapper[4872]: I0203 06:56:31.995825 4872 scope.go:117] "RemoveContainer" containerID="ec113935bed99c51c63b2c5bfcad0f4d405e1bbaf17f36f407a279084162716a" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.627506 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r5sgq"] Feb 03 06:56:43 crc kubenswrapper[4872]: E0203 06:56:43.628483 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="extract-utilities" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.628501 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="extract-utilities" Feb 03 06:56:43 crc kubenswrapper[4872]: E0203 06:56:43.628627 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="extract-content" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.628638 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="extract-content" Feb 03 06:56:43 crc kubenswrapper[4872]: E0203 06:56:43.628667 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="registry-server" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.628675 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="registry-server" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.628944 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4121ae7-6ba5-4841-ae27-29229ea0517b" containerName="registry-server" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.630563 4872 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.667452 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5sgq"] Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.722670 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-utilities\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.722743 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-catalog-content\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.722774 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgb5\" (UniqueName: \"kubernetes.io/projected/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-kube-api-access-xlgb5\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.824612 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-utilities\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.824663 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-catalog-content\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.824705 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlgb5\" (UniqueName: \"kubernetes.io/projected/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-kube-api-access-xlgb5\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.825150 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-utilities\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.825280 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-catalog-content\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.862209 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xlgb5\" (UniqueName: \"kubernetes.io/projected/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-kube-api-access-xlgb5\") pod \"community-operators-r5sgq\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:43 crc kubenswrapper[4872]: I0203 06:56:43.948574 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:44 crc kubenswrapper[4872]: I0203 06:56:44.693225 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5sgq"] Feb 03 06:56:45 crc kubenswrapper[4872]: I0203 06:56:45.114266 4872 generic.go:334] "Generic (PLEG): container finished" podID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerID="277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a" exitCode=0 Feb 03 06:56:45 crc kubenswrapper[4872]: I0203 06:56:45.114318 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5sgq" event={"ID":"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b","Type":"ContainerDied","Data":"277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a"} Feb 03 06:56:45 crc kubenswrapper[4872]: I0203 06:56:45.114349 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5sgq" event={"ID":"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b","Type":"ContainerStarted","Data":"60e7dbf6cabf9ce4bd1e50e49f3b19fb965bcff1d23567bc1b4106084d87675b"} Feb 03 06:56:45 crc kubenswrapper[4872]: I0203 06:56:45.118349 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 06:56:47 crc kubenswrapper[4872]: I0203 06:56:47.140591 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5sgq" event={"ID":"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b","Type":"ContainerStarted","Data":"470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb"} Feb 03 06:56:49 crc kubenswrapper[4872]: I0203 06:56:49.159929 4872 generic.go:334] "Generic (PLEG): container finished" podID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerID="470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb" exitCode=0 Feb 03 06:56:49 crc kubenswrapper[4872]: I0203 06:56:49.160009 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5sgq" event={"ID":"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b","Type":"ContainerDied","Data":"470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb"} Feb 03 06:56:50 crc kubenswrapper[4872]: I0203 06:56:50.177409 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5sgq" event={"ID":"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b","Type":"ContainerStarted","Data":"c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab"} Feb 03 06:56:50 crc kubenswrapper[4872]: I0203 06:56:50.199797 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r5sgq" podStartSLOduration=2.461104055 podStartE2EDuration="7.199779549s" podCreationTimestamp="2026-02-03 06:56:43 +0000 UTC" firstStartedPulling="2026-02-03 06:56:45.117813936 +0000 UTC m=+3375.700505380" lastFinishedPulling="2026-02-03 06:56:49.85648947 +0000 UTC m=+3380.439180874" observedRunningTime="2026-02-03 06:56:50.195239309 +0000 UTC m=+3380.777930733" watchObservedRunningTime="2026-02-03 
06:56:50.199779549 +0000 UTC m=+3380.782470963" Feb 03 06:56:53 crc kubenswrapper[4872]: I0203 06:56:53.949132 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:53 crc kubenswrapper[4872]: I0203 06:56:53.949738 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:56:54 crc kubenswrapper[4872]: I0203 06:56:54.993583 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r5sgq" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="registry-server" probeResult="failure" output=< Feb 03 06:56:54 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:56:54 crc kubenswrapper[4872]: > Feb 03 06:57:03 crc kubenswrapper[4872]: I0203 06:57:03.994794 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:57:04 crc kubenswrapper[4872]: I0203 06:57:04.066444 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:57:04 crc kubenswrapper[4872]: I0203 06:57:04.239451 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5sgq"] Feb 03 06:57:05 crc kubenswrapper[4872]: I0203 06:57:05.305651 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r5sgq" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="registry-server" containerID="cri-o://c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab" gracePeriod=2 Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.128861 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.246327 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlgb5\" (UniqueName: \"kubernetes.io/projected/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-kube-api-access-xlgb5\") pod \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.246631 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-utilities\") pod \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") " Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.246742 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-catalog-content\") pod \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\" (UID: \"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b\") "
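Note: the startup-probe output "timeout: failed to connect service \":50051\" within 1s" (the same failure certified-operators-xtr7g hit earlier) is the signature of a gRPC health check against the registry-server's catalog port before the server is listening; about ten seconds later the same probe reports started and the pod goes ready. A rough Go equivalent of such a check, assuming the standard grpc.health.v1 service (the exact probe command is not shown in the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        // Block until connected or the 1s deadline expires, like the probe timeout.
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
            return
        }
        defer conn.Close()
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("health check failed:", err)
            return
        }
        fmt.Println("status:", resp.Status) // SERVING once the catalog is loaded
    }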
Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.248593 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-utilities" (OuterVolumeSpecName: "utilities") pod "23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" (UID: "23f7bad8-7c3f-4d87-a6e0-f09d56578a1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.266489 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-kube-api-access-xlgb5" (OuterVolumeSpecName: "kube-api-access-xlgb5") pod "23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" (UID: "23f7bad8-7c3f-4d87-a6e0-f09d56578a1b"). InnerVolumeSpecName "kube-api-access-xlgb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.319281 4872 generic.go:334] "Generic (PLEG): container finished" podID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerID="c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab" exitCode=0 Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.319326 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5sgq" event={"ID":"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b","Type":"ContainerDied","Data":"c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab"} Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.319352 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5sgq" event={"ID":"23f7bad8-7c3f-4d87-a6e0-f09d56578a1b","Type":"ContainerDied","Data":"60e7dbf6cabf9ce4bd1e50e49f3b19fb965bcff1d23567bc1b4106084d87675b"} Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.319373 4872 scope.go:117] "RemoveContainer" containerID="c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.319450 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5sgq" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.326868 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" (UID: "23f7bad8-7c3f-4d87-a6e0-f09d56578a1b"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.342632 4872 scope.go:117] "RemoveContainer" containerID="470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.349395 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlgb5\" (UniqueName: \"kubernetes.io/projected/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-kube-api-access-xlgb5\") on node \"crc\" DevicePath \"\"" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.349419 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.349428 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.369871 4872 scope.go:117] "RemoveContainer" containerID="277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.436256 4872 scope.go:117] "RemoveContainer" containerID="c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab" Feb 03 06:57:06 crc kubenswrapper[4872]: E0203 06:57:06.436925 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab\": container with ID starting with c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab not found: ID does not exist" containerID="c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.437056 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab"} err="failed to get container status \"c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab\": rpc error: code = NotFound desc = could not find container \"c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab\": container with ID starting with c2cb2bd51e8e500735f5e680832a4d66743efe64dd7b8331ac50de89decc76ab not found: ID does not exist" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.437146 4872 scope.go:117] "RemoveContainer" containerID="470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb" Feb 03 06:57:06 crc kubenswrapper[4872]: E0203 06:57:06.438199 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb\": container with ID starting with 470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb not found: ID does not exist" containerID="470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb"
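Note: the E-level "ContainerStatus from runtime service failed ... NotFound" records in this teardown appear benign; RemoveContainer is invoked for containers CRI-O has already deleted, and the paired "DeleteContainer returned error" records are logged at I level, consistent with an already-gone container counting as successful cleanup. A sketch of treating that gRPC code as "already gone" (illustrative error handling, not the kubelet's own):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI error only means the container
    // no longer exists, in which case cleanup can be treated as done.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println(alreadyGone(err)) // true
    }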
Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.438231 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb"} err="failed to get container status \"470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb\": rpc error: code = NotFound desc = could not find container \"470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb\": container with ID starting with 470b15a49fb32f4f6c3d4f7c1ee4d3c06e8fbb9922f60498e6bff973a48f3ffb not found: ID does not exist" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.438251 4872 scope.go:117] "RemoveContainer" containerID="277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a" Feb 03 06:57:06 crc kubenswrapper[4872]: E0203 06:57:06.438511 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a\": container with ID starting with 277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a not found: ID does not exist" containerID="277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.438615 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a"} err="failed to get container status \"277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a\": rpc error: code = NotFound desc = could not find container \"277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a\": container with ID starting with 277bddb4458613e4d58aa1bbdfef2000615a7ef23fb918f5633abb31b136da8a not found: ID does not exist" Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.660200 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5sgq"] Feb 03 06:57:06 crc kubenswrapper[4872]: I0203 06:57:06.669672 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r5sgq"] Feb 03 06:57:08 crc kubenswrapper[4872]: I0203 06:57:08.132211 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" path="/var/lib/kubelet/pods/23f7bad8-7c3f-4d87-a6e0-f09d56578a1b/volumes" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.324221 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8nfb8"] Feb 03 06:57:20 crc kubenswrapper[4872]: E0203 06:57:20.325290 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="extract-utilities" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.325306 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="extract-utilities" Feb 03 06:57:20 crc kubenswrapper[4872]: E0203 06:57:20.325332 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="registry-server" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.325339 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="registry-server" Feb 03 06:57:20 crc kubenswrapper[4872]: E0203 06:57:20.325358 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="extract-content" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.325366 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="extract-content" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.325610 4872 memory_manager.go:354] "RemoveStaleState removing state"
podUID="23f7bad8-7c3f-4d87-a6e0-f09d56578a1b" containerName="registry-server" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.327042 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.338529 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nfb8"] Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.444534 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-utilities\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.444707 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wctks\" (UniqueName: \"kubernetes.io/projected/23b554c2-2da2-49cb-a767-51c8a32cf209-kube-api-access-wctks\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.444756 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-catalog-content\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.546387 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-utilities\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.546516 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wctks\" (UniqueName: \"kubernetes.io/projected/23b554c2-2da2-49cb-a767-51c8a32cf209-kube-api-access-wctks\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.546549 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-catalog-content\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.547047 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-catalog-content\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.547275 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-utilities\") pod \"redhat-marketplace-8nfb8\" (UID: 
\"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.573830 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wctks\" (UniqueName: \"kubernetes.io/projected/23b554c2-2da2-49cb-a767-51c8a32cf209-kube-api-access-wctks\") pod \"redhat-marketplace-8nfb8\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:20 crc kubenswrapper[4872]: I0203 06:57:20.690954 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:21 crc kubenswrapper[4872]: I0203 06:57:21.285226 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nfb8"] Feb 03 06:57:21 crc kubenswrapper[4872]: I0203 06:57:21.452896 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nfb8" event={"ID":"23b554c2-2da2-49cb-a767-51c8a32cf209","Type":"ContainerStarted","Data":"0c3752f3bbf743c13015a9ee866bb78c301273bd55c177dbf1e8e41b06a52606"} Feb 03 06:57:22 crc kubenswrapper[4872]: I0203 06:57:22.464256 4872 generic.go:334] "Generic (PLEG): container finished" podID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerID="5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb" exitCode=0 Feb 03 06:57:22 crc kubenswrapper[4872]: I0203 06:57:22.464543 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nfb8" event={"ID":"23b554c2-2da2-49cb-a767-51c8a32cf209","Type":"ContainerDied","Data":"5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb"} Feb 03 06:57:23 crc kubenswrapper[4872]: I0203 06:57:23.481024 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nfb8" event={"ID":"23b554c2-2da2-49cb-a767-51c8a32cf209","Type":"ContainerStarted","Data":"c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0"} Feb 03 06:57:25 crc kubenswrapper[4872]: I0203 06:57:25.508102 4872 generic.go:334] "Generic (PLEG): container finished" podID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerID="c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0" exitCode=0 Feb 03 06:57:25 crc kubenswrapper[4872]: I0203 06:57:25.508226 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nfb8" event={"ID":"23b554c2-2da2-49cb-a767-51c8a32cf209","Type":"ContainerDied","Data":"c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0"} Feb 03 06:57:27 crc kubenswrapper[4872]: I0203 06:57:27.527101 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nfb8" event={"ID":"23b554c2-2da2-49cb-a767-51c8a32cf209","Type":"ContainerStarted","Data":"992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054"} Feb 03 06:57:27 crc kubenswrapper[4872]: I0203 06:57:27.546536 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8nfb8" podStartSLOduration=2.987880647 podStartE2EDuration="7.546515879s" podCreationTimestamp="2026-02-03 06:57:20 +0000 UTC" firstStartedPulling="2026-02-03 06:57:22.467271803 +0000 UTC m=+3413.049963217" lastFinishedPulling="2026-02-03 06:57:27.025907045 +0000 UTC m=+3417.608598449" observedRunningTime="2026-02-03 06:57:27.544400069 +0000 UTC m=+3418.127091503" 
watchObservedRunningTime="2026-02-03 06:57:27.546515879 +0000 UTC m=+3418.129207293" Feb 03 06:57:30 crc kubenswrapper[4872]: I0203 06:57:30.691354 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:30 crc kubenswrapper[4872]: I0203 06:57:30.693910 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:30 crc kubenswrapper[4872]: I0203 06:57:30.739239 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:32 crc kubenswrapper[4872]: I0203 06:57:32.619896 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:32 crc kubenswrapper[4872]: I0203 06:57:32.676089 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nfb8"] Feb 03 06:57:34 crc kubenswrapper[4872]: I0203 06:57:34.585627 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8nfb8" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="registry-server" containerID="cri-o://992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054" gracePeriod=2 Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.331504 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.474347 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-catalog-content\") pod \"23b554c2-2da2-49cb-a767-51c8a32cf209\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.474751 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wctks\" (UniqueName: \"kubernetes.io/projected/23b554c2-2da2-49cb-a767-51c8a32cf209-kube-api-access-wctks\") pod \"23b554c2-2da2-49cb-a767-51c8a32cf209\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.474800 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-utilities\") pod \"23b554c2-2da2-49cb-a767-51c8a32cf209\" (UID: \"23b554c2-2da2-49cb-a767-51c8a32cf209\") " Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.476160 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-utilities" (OuterVolumeSpecName: "utilities") pod "23b554c2-2da2-49cb-a767-51c8a32cf209" (UID: "23b554c2-2da2-49cb-a767-51c8a32cf209"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.511879 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b554c2-2da2-49cb-a767-51c8a32cf209-kube-api-access-wctks" (OuterVolumeSpecName: "kube-api-access-wctks") pod "23b554c2-2da2-49cb-a767-51c8a32cf209" (UID: "23b554c2-2da2-49cb-a767-51c8a32cf209"). InnerVolumeSpecName "kube-api-access-wctks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.527538 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23b554c2-2da2-49cb-a767-51c8a32cf209" (UID: "23b554c2-2da2-49cb-a767-51c8a32cf209"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.577280 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.577304 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23b554c2-2da2-49cb-a767-51c8a32cf209-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.577316 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wctks\" (UniqueName: \"kubernetes.io/projected/23b554c2-2da2-49cb-a767-51c8a32cf209-kube-api-access-wctks\") on node \"crc\" DevicePath \"\"" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.602643 4872 generic.go:334] "Generic (PLEG): container finished" podID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerID="992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054" exitCode=0 Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.603028 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nfb8" event={"ID":"23b554c2-2da2-49cb-a767-51c8a32cf209","Type":"ContainerDied","Data":"992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054"} Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.603067 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nfb8" event={"ID":"23b554c2-2da2-49cb-a767-51c8a32cf209","Type":"ContainerDied","Data":"0c3752f3bbf743c13015a9ee866bb78c301273bd55c177dbf1e8e41b06a52606"} Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.603090 4872 scope.go:117] "RemoveContainer" containerID="992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.603245 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nfb8" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.645765 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nfb8"] Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.647206 4872 scope.go:117] "RemoveContainer" containerID="c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.654261 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nfb8"] Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.688027 4872 scope.go:117] "RemoveContainer" containerID="5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.717741 4872 scope.go:117] "RemoveContainer" containerID="992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054" Feb 03 06:57:35 crc kubenswrapper[4872]: E0203 06:57:35.718133 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054\": container with ID starting with 992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054 not found: ID does not exist" containerID="992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.718182 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054"} err="failed to get container status \"992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054\": rpc error: code = NotFound desc = could not find container \"992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054\": container with ID starting with 992aff7f204c85d19aa9228fb9b6eb0b5826050983b4d120fec0fac2f5f7e054 not found: ID does not exist" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.718203 4872 scope.go:117] "RemoveContainer" containerID="c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0" Feb 03 06:57:35 crc kubenswrapper[4872]: E0203 06:57:35.718410 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0\": container with ID starting with c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0 not found: ID does not exist" containerID="c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.718430 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0"} err="failed to get container status \"c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0\": rpc error: code = NotFound desc = could not find container \"c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0\": container with ID starting with c062ab08be074a99a12a7ba33cd8467aba91cbfdee700cb5439be29d4a0aa4e0 not found: ID does not exist" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.718460 4872 scope.go:117] "RemoveContainer" containerID="5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb" Feb 03 06:57:35 crc kubenswrapper[4872]: E0203 06:57:35.718628 4872 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb\": container with ID starting with 5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb not found: ID does not exist" containerID="5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb" Feb 03 06:57:35 crc kubenswrapper[4872]: I0203 06:57:35.718646 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb"} err="failed to get container status \"5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb\": rpc error: code = NotFound desc = could not find container \"5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb\": container with ID starting with 5264b89dc824ed7da88e61078c67002c37989df82f69e2970c204b7ebff84cbb not found: ID does not exist" Feb 03 06:57:36 crc kubenswrapper[4872]: I0203 06:57:36.135808 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" path="/var/lib/kubelet/pods/23b554c2-2da2-49cb-a767-51c8a32cf209/volumes" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.474679 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-67zhq"] Feb 03 06:58:04 crc kubenswrapper[4872]: E0203 06:58:04.477641 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="extract-content" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.477658 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="extract-content" Feb 03 06:58:04 crc kubenswrapper[4872]: E0203 06:58:04.477672 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="registry-server" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.477678 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="registry-server" Feb 03 06:58:04 crc kubenswrapper[4872]: E0203 06:58:04.477714 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="extract-utilities" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.477721 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="extract-utilities" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.477910 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b554c2-2da2-49cb-a767-51c8a32cf209" containerName="registry-server" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.486132 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.492003 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-67zhq"] Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.644198 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-utilities\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.644256 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhl8\" (UniqueName: \"kubernetes.io/projected/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-kube-api-access-lqhl8\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.644285 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-catalog-content\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.746737 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-utilities\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.746794 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhl8\" (UniqueName: \"kubernetes.io/projected/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-kube-api-access-lqhl8\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.746823 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-catalog-content\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.747259 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-utilities\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.747298 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-catalog-content\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.783707 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lqhl8\" (UniqueName: \"kubernetes.io/projected/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-kube-api-access-lqhl8\") pod \"redhat-operators-67zhq\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:04 crc kubenswrapper[4872]: I0203 06:58:04.814377 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:05 crc kubenswrapper[4872]: I0203 06:58:05.288436 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-67zhq"] Feb 03 06:58:05 crc kubenswrapper[4872]: I0203 06:58:05.886986 4872 generic.go:334] "Generic (PLEG): container finished" podID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerID="72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887" exitCode=0 Feb 03 06:58:05 crc kubenswrapper[4872]: I0203 06:58:05.887116 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-67zhq" event={"ID":"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e","Type":"ContainerDied","Data":"72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887"} Feb 03 06:58:05 crc kubenswrapper[4872]: I0203 06:58:05.887307 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-67zhq" event={"ID":"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e","Type":"ContainerStarted","Data":"137896270d75805bd0c28330ff23b79f71715f438c1253e3d058a551255e64b4"} Feb 03 06:58:06 crc kubenswrapper[4872]: I0203 06:58:06.897961 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-67zhq" event={"ID":"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e","Type":"ContainerStarted","Data":"6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc"} Feb 03 06:58:20 crc kubenswrapper[4872]: I0203 06:58:20.008332 4872 generic.go:334] "Generic (PLEG): container finished" podID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerID="6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc" exitCode=0 Feb 03 06:58:20 crc kubenswrapper[4872]: I0203 06:58:20.008399 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-67zhq" event={"ID":"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e","Type":"ContainerDied","Data":"6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc"} Feb 03 06:58:22 crc kubenswrapper[4872]: I0203 06:58:22.031278 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-67zhq" event={"ID":"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e","Type":"ContainerStarted","Data":"213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04"} Feb 03 06:58:22 crc kubenswrapper[4872]: I0203 06:58:22.059611 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-67zhq" podStartSLOduration=2.742232232 podStartE2EDuration="18.059593797s" podCreationTimestamp="2026-02-03 06:58:04 +0000 UTC" firstStartedPulling="2026-02-03 06:58:05.891005543 +0000 UTC m=+3456.473696957" lastFinishedPulling="2026-02-03 06:58:21.208367108 +0000 UTC m=+3471.791058522" observedRunningTime="2026-02-03 06:58:22.050433107 +0000 UTC m=+3472.633124531" watchObservedRunningTime="2026-02-03 06:58:22.059593797 +0000 UTC m=+3472.642285201" Feb 03 06:58:24 crc kubenswrapper[4872]: I0203 06:58:24.814893 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-67zhq" 
Feb 03 06:58:24 crc kubenswrapper[4872]: I0203 06:58:24.815225 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:25 crc kubenswrapper[4872]: I0203 06:58:25.864404 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-67zhq" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="registry-server" probeResult="failure" output=< Feb 03 06:58:25 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:58:25 crc kubenswrapper[4872]: > Feb 03 06:58:31 crc kubenswrapper[4872]: I0203 06:58:31.272038 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:58:31 crc kubenswrapper[4872]: I0203 06:58:31.272732 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:58:35 crc kubenswrapper[4872]: I0203 06:58:35.866203 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-67zhq" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="registry-server" probeResult="failure" output=< Feb 03 06:58:35 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 06:58:35 crc kubenswrapper[4872]: > Feb 03 06:58:44 crc kubenswrapper[4872]: I0203 06:58:44.867995 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:44 crc kubenswrapper[4872]: I0203 06:58:44.937004 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:45 crc kubenswrapper[4872]: I0203 06:58:45.143193 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-67zhq"] Feb 03 06:58:46 crc kubenswrapper[4872]: I0203 06:58:46.224987 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-67zhq" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="registry-server" containerID="cri-o://213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04" gracePeriod=2 Feb 03 06:58:46 crc kubenswrapper[4872]: I0203 06:58:46.930004 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.024159 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-catalog-content\") pod \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.024320 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqhl8\" (UniqueName: \"kubernetes.io/projected/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-kube-api-access-lqhl8\") pod \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.024384 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-utilities\") pod \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\" (UID: \"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e\") " Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.025330 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-utilities" (OuterVolumeSpecName: "utilities") pod "4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" (UID: "4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.045879 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-kube-api-access-lqhl8" (OuterVolumeSpecName: "kube-api-access-lqhl8") pod "4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" (UID: "4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e"). InnerVolumeSpecName "kube-api-access-lqhl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.128182 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqhl8\" (UniqueName: \"kubernetes.io/projected/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-kube-api-access-lqhl8\") on node \"crc\" DevicePath \"\"" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.128216 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.239165 4872 generic.go:334] "Generic (PLEG): container finished" podID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerID="213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04" exitCode=0 Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.239206 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-67zhq" event={"ID":"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e","Type":"ContainerDied","Data":"213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04"} Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.239231 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-67zhq" event={"ID":"4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e","Type":"ContainerDied","Data":"137896270d75805bd0c28330ff23b79f71715f438c1253e3d058a551255e64b4"} Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.239247 4872 scope.go:117] "RemoveContainer" containerID="213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.239248 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-67zhq" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.256202 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" (UID: "4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.273859 4872 scope.go:117] "RemoveContainer" containerID="6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.305177 4872 scope.go:117] "RemoveContainer" containerID="72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.332273 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.350175 4872 scope.go:117] "RemoveContainer" containerID="213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04" Feb 03 06:58:47 crc kubenswrapper[4872]: E0203 06:58:47.350776 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04\": container with ID starting with 213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04 not found: ID does not exist" containerID="213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.350819 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04"} err="failed to get container status \"213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04\": rpc error: code = NotFound desc = could not find container \"213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04\": container with ID starting with 213b97a3a8e64cbb350001f8b1c538608bbae0671e61bd5741a1dcf7cc5f6d04 not found: ID does not exist" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.350843 4872 scope.go:117] "RemoveContainer" containerID="6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc" Feb 03 06:58:47 crc kubenswrapper[4872]: E0203 06:58:47.351124 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc\": container with ID starting with 6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc not found: ID does not exist" containerID="6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.351146 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc"} err="failed to get container status \"6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc\": rpc error: code = NotFound desc = could not find container \"6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc\": container with ID starting with 6ecb0708e521e4bd9ce575a5ff900d469177f8c8eda846117979a803ff1e6cfc not found: ID does not exist" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.351159 4872 scope.go:117] "RemoveContainer" containerID="72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887" Feb 03 06:58:47 crc kubenswrapper[4872]: E0203 06:58:47.351404 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887\": container with ID starting with 72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887 not found: ID does not exist" containerID="72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.351444 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887"} err="failed to get container status \"72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887\": rpc error: code = NotFound desc = could not find container \"72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887\": container with ID starting with 72883c5a665f5ec2951bd5d7b09badfdb24a14348c94605be8cb3c4a4638c887 not found: ID does not exist" Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.583193 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-67zhq"] Feb 03 06:58:47 crc kubenswrapper[4872]: I0203 06:58:47.593048 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-67zhq"] Feb 03 06:58:48 crc kubenswrapper[4872]: I0203 06:58:48.134987 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" path="/var/lib/kubelet/pods/4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e/volumes" Feb 03 06:59:01 crc kubenswrapper[4872]: I0203 06:59:01.271348 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:59:01 crc kubenswrapper[4872]: I0203 06:59:01.273012 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.271985 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.272938 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.273018 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.274265 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.274332 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" gracePeriod=600 Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.681263 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" exitCode=0 Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.681312 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede"} Feb 03 06:59:31 crc kubenswrapper[4872]: I0203 06:59:31.681348 4872 scope.go:117] "RemoveContainer" containerID="d9d4a724111c20653851d457af237fb8bdb8d8f30fc74b1ed0c382d96168d44b" Feb 03 06:59:32 crc kubenswrapper[4872]: E0203 06:59:32.154086 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:59:32 crc kubenswrapper[4872]: I0203 06:59:32.723581 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 06:59:32 crc kubenswrapper[4872]: E0203 06:59:32.723854 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:59:45 crc kubenswrapper[4872]: I0203 06:59:45.123486 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 06:59:45 crc kubenswrapper[4872]: E0203 06:59:45.124677 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.121875 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nwkc"] Feb 03 06:59:47 crc kubenswrapper[4872]: E0203 06:59:47.123649 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="registry-server" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.123807 4872 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="registry-server" Feb 03 06:59:47 crc kubenswrapper[4872]: E0203 06:59:47.123907 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="extract-content" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.123987 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="extract-content" Feb 03 06:59:47 crc kubenswrapper[4872]: E0203 06:59:47.124080 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="extract-utilities" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.124175 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="extract-utilities" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.124478 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c7e7dd6-ed8d-4a59-8ae4-1d0d6f3abd3e" containerName="registry-server" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.126236 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.135614 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nwkc"] Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.279307 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8gt\" (UniqueName: \"kubernetes.io/projected/685beaa3-0b95-49ae-a376-bcc4f4dd717c-kube-api-access-td8gt\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.279807 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-catalog-content\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.280009 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-utilities\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.381298 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8gt\" (UniqueName: \"kubernetes.io/projected/685beaa3-0b95-49ae-a376-bcc4f4dd717c-kube-api-access-td8gt\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.381411 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-catalog-content\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 
06:59:47.381465 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-utilities\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.382086 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-utilities\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.382672 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-catalog-content\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.412663 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8gt\" (UniqueName: \"kubernetes.io/projected/685beaa3-0b95-49ae-a376-bcc4f4dd717c-kube-api-access-td8gt\") pod \"certified-operators-9nwkc\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:47 crc kubenswrapper[4872]: I0203 06:59:47.454129 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:48 crc kubenswrapper[4872]: I0203 06:59:48.035009 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nwkc"] Feb 03 06:59:48 crc kubenswrapper[4872]: I0203 06:59:48.853435 4872 generic.go:334] "Generic (PLEG): container finished" podID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerID="06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd" exitCode=0 Feb 03 06:59:48 crc kubenswrapper[4872]: I0203 06:59:48.853489 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nwkc" event={"ID":"685beaa3-0b95-49ae-a376-bcc4f4dd717c","Type":"ContainerDied","Data":"06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd"} Feb 03 06:59:48 crc kubenswrapper[4872]: I0203 06:59:48.853668 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nwkc" event={"ID":"685beaa3-0b95-49ae-a376-bcc4f4dd717c","Type":"ContainerStarted","Data":"cf7f7bc66c4a11fbbaa4f544126503da0545755bf86530482da0bf246d87bd14"} Feb 03 06:59:50 crc kubenswrapper[4872]: I0203 06:59:50.874051 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nwkc" event={"ID":"685beaa3-0b95-49ae-a376-bcc4f4dd717c","Type":"ContainerStarted","Data":"8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8"} Feb 03 06:59:53 crc kubenswrapper[4872]: I0203 06:59:53.899904 4872 generic.go:334] "Generic (PLEG): container finished" podID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerID="8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8" exitCode=0 Feb 03 06:59:53 crc kubenswrapper[4872]: I0203 06:59:53.899951 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nwkc" 
event={"ID":"685beaa3-0b95-49ae-a376-bcc4f4dd717c","Type":"ContainerDied","Data":"8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8"} Feb 03 06:59:54 crc kubenswrapper[4872]: I0203 06:59:54.910006 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nwkc" event={"ID":"685beaa3-0b95-49ae-a376-bcc4f4dd717c","Type":"ContainerStarted","Data":"85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38"} Feb 03 06:59:54 crc kubenswrapper[4872]: I0203 06:59:54.929014 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nwkc" podStartSLOduration=2.408287086 podStartE2EDuration="7.928995586s" podCreationTimestamp="2026-02-03 06:59:47 +0000 UTC" firstStartedPulling="2026-02-03 06:59:48.855709127 +0000 UTC m=+3559.438400541" lastFinishedPulling="2026-02-03 06:59:54.376417627 +0000 UTC m=+3564.959109041" observedRunningTime="2026-02-03 06:59:54.926521026 +0000 UTC m=+3565.509212440" watchObservedRunningTime="2026-02-03 06:59:54.928995586 +0000 UTC m=+3565.511687000" Feb 03 06:59:57 crc kubenswrapper[4872]: I0203 06:59:57.123317 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 06:59:57 crc kubenswrapper[4872]: E0203 06:59:57.124149 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 06:59:57 crc kubenswrapper[4872]: I0203 06:59:57.454811 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:57 crc kubenswrapper[4872]: I0203 06:59:57.454983 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 06:59:57 crc kubenswrapper[4872]: I0203 06:59:57.503874 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.198478 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r"] Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.200961 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.205492 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r"] Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.235870 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.235879 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.342665 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb97a14f-3aab-4f43-880f-e849888880c6-secret-volume\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.343116 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zcm8\" (UniqueName: \"kubernetes.io/projected/cb97a14f-3aab-4f43-880f-e849888880c6-kube-api-access-2zcm8\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.343596 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb97a14f-3aab-4f43-880f-e849888880c6-config-volume\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.445653 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb97a14f-3aab-4f43-880f-e849888880c6-secret-volume\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.445792 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zcm8\" (UniqueName: \"kubernetes.io/projected/cb97a14f-3aab-4f43-880f-e849888880c6-kube-api-access-2zcm8\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.445858 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb97a14f-3aab-4f43-880f-e849888880c6-config-volume\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.446659 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb97a14f-3aab-4f43-880f-e849888880c6-config-volume\") pod 
\"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.452135 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb97a14f-3aab-4f43-880f-e849888880c6-secret-volume\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.469367 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zcm8\" (UniqueName: \"kubernetes.io/projected/cb97a14f-3aab-4f43-880f-e849888880c6-kube-api-access-2zcm8\") pod \"collect-profiles-29501700-tz28r\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:00 crc kubenswrapper[4872]: I0203 07:00:00.562120 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:01 crc kubenswrapper[4872]: I0203 07:00:01.099974 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r"] Feb 03 07:00:01 crc kubenswrapper[4872]: I0203 07:00:01.976098 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" event={"ID":"cb97a14f-3aab-4f43-880f-e849888880c6","Type":"ContainerStarted","Data":"53f7de93c5580ff3c174a88602c0d91fa165620437d1c2e35c5a0f20a68d994a"} Feb 03 07:00:01 crc kubenswrapper[4872]: I0203 07:00:01.977535 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" event={"ID":"cb97a14f-3aab-4f43-880f-e849888880c6","Type":"ContainerStarted","Data":"ed295edbaef165206dcfc34ebe7e39b22ba27ee21cfb80acc49ebd032e131f8a"} Feb 03 07:00:01 crc kubenswrapper[4872]: I0203 07:00:01.996893 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" podStartSLOduration=1.996874677 podStartE2EDuration="1.996874677s" podCreationTimestamp="2026-02-03 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 07:00:01.989841467 +0000 UTC m=+3572.572532911" watchObservedRunningTime="2026-02-03 07:00:01.996874677 +0000 UTC m=+3572.579566091" Feb 03 07:00:02 crc kubenswrapper[4872]: I0203 07:00:02.987730 4872 generic.go:334] "Generic (PLEG): container finished" podID="cb97a14f-3aab-4f43-880f-e849888880c6" containerID="53f7de93c5580ff3c174a88602c0d91fa165620437d1c2e35c5a0f20a68d994a" exitCode=0 Feb 03 07:00:02 crc kubenswrapper[4872]: I0203 07:00:02.988045 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" event={"ID":"cb97a14f-3aab-4f43-880f-e849888880c6","Type":"ContainerDied","Data":"53f7de93c5580ff3c174a88602c0d91fa165620437d1c2e35c5a0f20a68d994a"} Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.420023 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.517747 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zcm8\" (UniqueName: \"kubernetes.io/projected/cb97a14f-3aab-4f43-880f-e849888880c6-kube-api-access-2zcm8\") pod \"cb97a14f-3aab-4f43-880f-e849888880c6\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.517941 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb97a14f-3aab-4f43-880f-e849888880c6-secret-volume\") pod \"cb97a14f-3aab-4f43-880f-e849888880c6\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.518018 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb97a14f-3aab-4f43-880f-e849888880c6-config-volume\") pod \"cb97a14f-3aab-4f43-880f-e849888880c6\" (UID: \"cb97a14f-3aab-4f43-880f-e849888880c6\") " Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.519142 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb97a14f-3aab-4f43-880f-e849888880c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb97a14f-3aab-4f43-880f-e849888880c6" (UID: "cb97a14f-3aab-4f43-880f-e849888880c6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.526047 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb97a14f-3aab-4f43-880f-e849888880c6-kube-api-access-2zcm8" (OuterVolumeSpecName: "kube-api-access-2zcm8") pod "cb97a14f-3aab-4f43-880f-e849888880c6" (UID: "cb97a14f-3aab-4f43-880f-e849888880c6"). InnerVolumeSpecName "kube-api-access-2zcm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.527527 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb97a14f-3aab-4f43-880f-e849888880c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb97a14f-3aab-4f43-880f-e849888880c6" (UID: "cb97a14f-3aab-4f43-880f-e849888880c6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.620033 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb97a14f-3aab-4f43-880f-e849888880c6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.620081 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb97a14f-3aab-4f43-880f-e849888880c6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 07:00:04 crc kubenswrapper[4872]: I0203 07:00:04.620093 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zcm8\" (UniqueName: \"kubernetes.io/projected/cb97a14f-3aab-4f43-880f-e849888880c6-kube-api-access-2zcm8\") on node \"crc\" DevicePath \"\"" Feb 03 07:00:05 crc kubenswrapper[4872]: I0203 07:00:05.005586 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" event={"ID":"cb97a14f-3aab-4f43-880f-e849888880c6","Type":"ContainerDied","Data":"ed295edbaef165206dcfc34ebe7e39b22ba27ee21cfb80acc49ebd032e131f8a"} Feb 03 07:00:05 crc kubenswrapper[4872]: I0203 07:00:05.005639 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed295edbaef165206dcfc34ebe7e39b22ba27ee21cfb80acc49ebd032e131f8a" Feb 03 07:00:05 crc kubenswrapper[4872]: I0203 07:00:05.005650 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501700-tz28r" Feb 03 07:00:05 crc kubenswrapper[4872]: I0203 07:00:05.083949 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x"] Feb 03 07:00:05 crc kubenswrapper[4872]: I0203 07:00:05.093914 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501655-vb28x"] Feb 03 07:00:06 crc kubenswrapper[4872]: I0203 07:00:06.163145 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231294bd-0890-405c-b99a-91471441e1e8" path="/var/lib/kubelet/pods/231294bd-0890-405c-b99a-91471441e1e8/volumes" Feb 03 07:00:07 crc kubenswrapper[4872]: I0203 07:00:07.511754 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 07:00:07 crc kubenswrapper[4872]: I0203 07:00:07.585455 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nwkc"] Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.026478 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nwkc" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="registry-server" containerID="cri-o://85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38" gracePeriod=2 Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.122792 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:00:08 crc kubenswrapper[4872]: E0203 07:00:08.123132 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.699445 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.804654 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8gt\" (UniqueName: \"kubernetes.io/projected/685beaa3-0b95-49ae-a376-bcc4f4dd717c-kube-api-access-td8gt\") pod \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.804934 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-catalog-content\") pod \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.805153 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-utilities\") pod \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\" (UID: \"685beaa3-0b95-49ae-a376-bcc4f4dd717c\") " Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.809177 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-utilities" (OuterVolumeSpecName: "utilities") pod "685beaa3-0b95-49ae-a376-bcc4f4dd717c" (UID: "685beaa3-0b95-49ae-a376-bcc4f4dd717c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.817975 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/685beaa3-0b95-49ae-a376-bcc4f4dd717c-kube-api-access-td8gt" (OuterVolumeSpecName: "kube-api-access-td8gt") pod "685beaa3-0b95-49ae-a376-bcc4f4dd717c" (UID: "685beaa3-0b95-49ae-a376-bcc4f4dd717c"). InnerVolumeSpecName "kube-api-access-td8gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.852596 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "685beaa3-0b95-49ae-a376-bcc4f4dd717c" (UID: "685beaa3-0b95-49ae-a376-bcc4f4dd717c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.908498 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8gt\" (UniqueName: \"kubernetes.io/projected/685beaa3-0b95-49ae-a376-bcc4f4dd717c-kube-api-access-td8gt\") on node \"crc\" DevicePath \"\"" Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.908524 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:00:08 crc kubenswrapper[4872]: I0203 07:00:08.908533 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/685beaa3-0b95-49ae-a376-bcc4f4dd717c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.037198 4872 generic.go:334] "Generic (PLEG): container finished" podID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerID="85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38" exitCode=0 Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.037289 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nwkc" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.037278 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nwkc" event={"ID":"685beaa3-0b95-49ae-a376-bcc4f4dd717c","Type":"ContainerDied","Data":"85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38"} Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.038149 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nwkc" event={"ID":"685beaa3-0b95-49ae-a376-bcc4f4dd717c","Type":"ContainerDied","Data":"cf7f7bc66c4a11fbbaa4f544126503da0545755bf86530482da0bf246d87bd14"} Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.038207 4872 scope.go:117] "RemoveContainer" containerID="85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.079824 4872 scope.go:117] "RemoveContainer" containerID="8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.089729 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nwkc"] Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.100804 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nwkc"] Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.116238 4872 scope.go:117] "RemoveContainer" containerID="06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.149881 4872 scope.go:117] "RemoveContainer" containerID="85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38" Feb 03 07:00:09 crc kubenswrapper[4872]: E0203 07:00:09.150347 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38\": container with ID starting with 85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38 not found: ID does not exist" containerID="85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.150393 
4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38"} err="failed to get container status \"85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38\": rpc error: code = NotFound desc = could not find container \"85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38\": container with ID starting with 85d15c1cda83d3446854d234b1396199bf9eac39c38227281af5fed0afe1bb38 not found: ID does not exist" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.150437 4872 scope.go:117] "RemoveContainer" containerID="8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8" Feb 03 07:00:09 crc kubenswrapper[4872]: E0203 07:00:09.150897 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8\": container with ID starting with 8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8 not found: ID does not exist" containerID="8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.150926 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8"} err="failed to get container status \"8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8\": rpc error: code = NotFound desc = could not find container \"8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8\": container with ID starting with 8a3b4326414db18f9ccc98b603c6a6165f4809f3b7b4dd8c63d13ce7ed36f1c8 not found: ID does not exist" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.150947 4872 scope.go:117] "RemoveContainer" containerID="06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd" Feb 03 07:00:09 crc kubenswrapper[4872]: E0203 07:00:09.151145 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd\": container with ID starting with 06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd not found: ID does not exist" containerID="06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd" Feb 03 07:00:09 crc kubenswrapper[4872]: I0203 07:00:09.151169 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd"} err="failed to get container status \"06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd\": rpc error: code = NotFound desc = could not find container \"06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd\": container with ID starting with 06d935fe6a2a6da26fdc1e3df0c2c49dcfb8b83219b29e6f33636da7d2e0c0bd not found: ID does not exist" Feb 03 07:00:10 crc kubenswrapper[4872]: I0203 07:00:10.133896 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" path="/var/lib/kubelet/pods/685beaa3-0b95-49ae-a376-bcc4f4dd717c/volumes" Feb 03 07:00:19 crc kubenswrapper[4872]: I0203 07:00:19.123163 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:00:19 crc kubenswrapper[4872]: E0203 07:00:19.123953 4872 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:00:32 crc kubenswrapper[4872]: I0203 07:00:32.122605 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:00:32 crc kubenswrapper[4872]: E0203 07:00:32.123324 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:00:46 crc kubenswrapper[4872]: I0203 07:00:46.168723 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:00:46 crc kubenswrapper[4872]: E0203 07:00:46.169313 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:00:57 crc kubenswrapper[4872]: I0203 07:00:57.122887 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:00:57 crc kubenswrapper[4872]: E0203 07:00:57.123737 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:00:58 crc kubenswrapper[4872]: I0203 07:00:58.853541 4872 scope.go:117] "RemoveContainer" containerID="cc48c9f4d8e4b659b49b03e5947fb6b4b03d1d354023d3aca6759aa2a97696d4" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.164768 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29501701-6l75v"] Feb 03 07:01:00 crc kubenswrapper[4872]: E0203 07:01:00.167722 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="registry-server" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.167759 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="registry-server" Feb 03 07:01:00 crc kubenswrapper[4872]: E0203 07:01:00.167780 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="extract-content" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.167787 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="extract-content" Feb 03 07:01:00 crc 
kubenswrapper[4872]: E0203 07:01:00.167798 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb97a14f-3aab-4f43-880f-e849888880c6" containerName="collect-profiles" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.167807 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb97a14f-3aab-4f43-880f-e849888880c6" containerName="collect-profiles" Feb 03 07:01:00 crc kubenswrapper[4872]: E0203 07:01:00.167822 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="extract-utilities" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.167829 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="extract-utilities" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.168054 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="685beaa3-0b95-49ae-a376-bcc4f4dd717c" containerName="registry-server" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.168074 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb97a14f-3aab-4f43-880f-e849888880c6" containerName="collect-profiles" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.168820 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.235040 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29501701-6l75v"] Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.339167 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-fernet-keys\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.339270 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-config-data\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.339443 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-combined-ca-bundle\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.339469 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vxp\" (UniqueName: \"kubernetes.io/projected/1214a97d-f94a-40bf-88ea-0310ff11684d-kube-api-access-75vxp\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.441292 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-combined-ca-bundle\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " 
pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.441347 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vxp\" (UniqueName: \"kubernetes.io/projected/1214a97d-f94a-40bf-88ea-0310ff11684d-kube-api-access-75vxp\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.441473 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-fernet-keys\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.441523 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-config-data\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.448896 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-combined-ca-bundle\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.449205 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-config-data\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.451201 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-fernet-keys\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.462563 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vxp\" (UniqueName: \"kubernetes.io/projected/1214a97d-f94a-40bf-88ea-0310ff11684d-kube-api-access-75vxp\") pod \"keystone-cron-29501701-6l75v\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:00 crc kubenswrapper[4872]: I0203 07:01:00.519462 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:01 crc kubenswrapper[4872]: I0203 07:01:01.025292 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29501701-6l75v"] Feb 03 07:01:01 crc kubenswrapper[4872]: I0203 07:01:01.551725 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501701-6l75v" event={"ID":"1214a97d-f94a-40bf-88ea-0310ff11684d","Type":"ContainerStarted","Data":"f919ed7f72f0cd6c1c84e9112e1092d9c7117aab44c24a2038c2f123b3d844be"} Feb 03 07:01:01 crc kubenswrapper[4872]: I0203 07:01:01.552034 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501701-6l75v" event={"ID":"1214a97d-f94a-40bf-88ea-0310ff11684d","Type":"ContainerStarted","Data":"acdf1784e6f6ffc36017d60cd44782a3979b327ca11074eeee997e37833c48f3"} Feb 03 07:01:01 crc kubenswrapper[4872]: I0203 07:01:01.574070 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29501701-6l75v" podStartSLOduration=1.5740535580000001 podStartE2EDuration="1.574053558s" podCreationTimestamp="2026-02-03 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 07:01:01.572198863 +0000 UTC m=+3632.154890287" watchObservedRunningTime="2026-02-03 07:01:01.574053558 +0000 UTC m=+3632.156744972" Feb 03 07:01:08 crc kubenswrapper[4872]: I0203 07:01:08.612557 4872 generic.go:334] "Generic (PLEG): container finished" podID="1214a97d-f94a-40bf-88ea-0310ff11684d" containerID="f919ed7f72f0cd6c1c84e9112e1092d9c7117aab44c24a2038c2f123b3d844be" exitCode=0 Feb 03 07:01:08 crc kubenswrapper[4872]: I0203 07:01:08.613239 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501701-6l75v" event={"ID":"1214a97d-f94a-40bf-88ea-0310ff11684d","Type":"ContainerDied","Data":"f919ed7f72f0cd6c1c84e9112e1092d9c7117aab44c24a2038c2f123b3d844be"} Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.198279 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.343451 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vxp\" (UniqueName: \"kubernetes.io/projected/1214a97d-f94a-40bf-88ea-0310ff11684d-kube-api-access-75vxp\") pod \"1214a97d-f94a-40bf-88ea-0310ff11684d\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.343550 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-config-data\") pod \"1214a97d-f94a-40bf-88ea-0310ff11684d\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.343955 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-combined-ca-bundle\") pod \"1214a97d-f94a-40bf-88ea-0310ff11684d\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.344038 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-fernet-keys\") pod \"1214a97d-f94a-40bf-88ea-0310ff11684d\" (UID: \"1214a97d-f94a-40bf-88ea-0310ff11684d\") " Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.349805 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1214a97d-f94a-40bf-88ea-0310ff11684d" (UID: "1214a97d-f94a-40bf-88ea-0310ff11684d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.376171 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1214a97d-f94a-40bf-88ea-0310ff11684d-kube-api-access-75vxp" (OuterVolumeSpecName: "kube-api-access-75vxp") pod "1214a97d-f94a-40bf-88ea-0310ff11684d" (UID: "1214a97d-f94a-40bf-88ea-0310ff11684d"). InnerVolumeSpecName "kube-api-access-75vxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.397944 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1214a97d-f94a-40bf-88ea-0310ff11684d" (UID: "1214a97d-f94a-40bf-88ea-0310ff11684d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.414440 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-config-data" (OuterVolumeSpecName: "config-data") pod "1214a97d-f94a-40bf-88ea-0310ff11684d" (UID: "1214a97d-f94a-40bf-88ea-0310ff11684d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.446408 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vxp\" (UniqueName: \"kubernetes.io/projected/1214a97d-f94a-40bf-88ea-0310ff11684d-kube-api-access-75vxp\") on node \"crc\" DevicePath \"\"" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.446450 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.446462 4872 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.446477 4872 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1214a97d-f94a-40bf-88ea-0310ff11684d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.633437 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29501701-6l75v" event={"ID":"1214a97d-f94a-40bf-88ea-0310ff11684d","Type":"ContainerDied","Data":"acdf1784e6f6ffc36017d60cd44782a3979b327ca11074eeee997e37833c48f3"} Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.633805 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acdf1784e6f6ffc36017d60cd44782a3979b327ca11074eeee997e37833c48f3" Feb 03 07:01:10 crc kubenswrapper[4872]: I0203 07:01:10.633582 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29501701-6l75v" Feb 03 07:01:12 crc kubenswrapper[4872]: I0203 07:01:12.123040 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:01:12 crc kubenswrapper[4872]: E0203 07:01:12.123938 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:01:26 crc kubenswrapper[4872]: I0203 07:01:26.123778 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:01:26 crc kubenswrapper[4872]: E0203 07:01:26.124785 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:01:37 crc kubenswrapper[4872]: I0203 07:01:37.123659 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:01:37 crc kubenswrapper[4872]: E0203 07:01:37.124384 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:01:49 crc kubenswrapper[4872]: I0203 07:01:49.123254 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:01:49 crc kubenswrapper[4872]: E0203 07:01:49.124096 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:02:03 crc kubenswrapper[4872]: I0203 07:02:03.123612 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:02:03 crc kubenswrapper[4872]: E0203 07:02:03.125323 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:02:18 crc kubenswrapper[4872]: I0203 07:02:18.123287 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:02:18 crc kubenswrapper[4872]: E0203 07:02:18.124023 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:02:31 crc kubenswrapper[4872]: I0203 07:02:31.122819 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:02:31 crc kubenswrapper[4872]: E0203 07:02:31.123658 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:02:43 crc kubenswrapper[4872]: I0203 07:02:43.690837 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-f5458fb75-k8gpr" podUID="7869dbc8-d72a-47cf-8547-40b91024653f" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 03 07:02:46 crc kubenswrapper[4872]: I0203 07:02:46.123083 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:02:46 crc kubenswrapper[4872]: E0203 07:02:46.123996 4872 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:03:01 crc kubenswrapper[4872]: I0203 07:03:01.122467 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:03:01 crc kubenswrapper[4872]: E0203 07:03:01.123148 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:03:13 crc kubenswrapper[4872]: I0203 07:03:13.123034 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:03:13 crc kubenswrapper[4872]: E0203 07:03:13.123808 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:03:25 crc kubenswrapper[4872]: I0203 07:03:25.122569 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:03:25 crc kubenswrapper[4872]: E0203 07:03:25.123401 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:03:40 crc kubenswrapper[4872]: I0203 07:03:40.129389 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:03:40 crc kubenswrapper[4872]: E0203 07:03:40.130209 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:03:55 crc kubenswrapper[4872]: I0203 07:03:55.123080 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:03:55 crc kubenswrapper[4872]: E0203 07:03:55.123805 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:04:07 crc kubenswrapper[4872]: I0203 07:04:07.122843 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:04:07 crc kubenswrapper[4872]: E0203 07:04:07.123615 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:04:20 crc kubenswrapper[4872]: I0203 07:04:20.138663 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:04:20 crc kubenswrapper[4872]: E0203 07:04:20.139851 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:04:34 crc kubenswrapper[4872]: I0203 07:04:34.122908 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:04:35 crc kubenswrapper[4872]: I0203 07:04:35.432708 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"df6fc6a27a71f15957fc4adf158004172fef9aa3018164e62164ec2c8b39d146"} Feb 03 07:07:01 crc kubenswrapper[4872]: I0203 07:07:01.271338 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:07:01 crc kubenswrapper[4872]: I0203 07:07:01.271960 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:07:31 crc kubenswrapper[4872]: I0203 07:07:31.272137 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:07:31 crc kubenswrapper[4872]: I0203 07:07:31.272725 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.483711 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gbxgf"] Feb 03 07:07:36 crc kubenswrapper[4872]: E0203 07:07:36.485564 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1214a97d-f94a-40bf-88ea-0310ff11684d" containerName="keystone-cron" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.485896 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="1214a97d-f94a-40bf-88ea-0310ff11684d" containerName="keystone-cron" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.486179 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="1214a97d-f94a-40bf-88ea-0310ff11684d" containerName="keystone-cron" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.490581 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.503756 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gbxgf"] Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.652328 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-catalog-content\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.652384 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-utilities\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.652448 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4jg\" (UniqueName: \"kubernetes.io/projected/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-kube-api-access-vn4jg\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.754270 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn4jg\" (UniqueName: \"kubernetes.io/projected/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-kube-api-access-vn4jg\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.754435 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-catalog-content\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.754456 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-utilities\") pod \"community-operators-gbxgf\" (UID: 
\"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.754884 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-utilities\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.755429 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-catalog-content\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.781952 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn4jg\" (UniqueName: \"kubernetes.io/projected/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-kube-api-access-vn4jg\") pod \"community-operators-gbxgf\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:36 crc kubenswrapper[4872]: I0203 07:07:36.817637 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:37 crc kubenswrapper[4872]: I0203 07:07:37.444131 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gbxgf"] Feb 03 07:07:37 crc kubenswrapper[4872]: I0203 07:07:37.553503 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbxgf" event={"ID":"eeebb37b-8b7c-4eb8-b14c-5cde383bac59","Type":"ContainerStarted","Data":"d7f9470390f8d894d4b08c616565f31e1c8aca42b0865de5dab247300ff7f108"} Feb 03 07:07:38 crc kubenswrapper[4872]: I0203 07:07:38.562632 4872 generic.go:334] "Generic (PLEG): container finished" podID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerID="ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e" exitCode=0 Feb 03 07:07:38 crc kubenswrapper[4872]: I0203 07:07:38.562706 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbxgf" event={"ID":"eeebb37b-8b7c-4eb8-b14c-5cde383bac59","Type":"ContainerDied","Data":"ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e"} Feb 03 07:07:38 crc kubenswrapper[4872]: I0203 07:07:38.566817 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 07:07:39 crc kubenswrapper[4872]: I0203 07:07:39.573957 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbxgf" event={"ID":"eeebb37b-8b7c-4eb8-b14c-5cde383bac59","Type":"ContainerStarted","Data":"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479"} Feb 03 07:07:43 crc kubenswrapper[4872]: I0203 07:07:43.615239 4872 generic.go:334] "Generic (PLEG): container finished" podID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerID="a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479" exitCode=0 Feb 03 07:07:43 crc kubenswrapper[4872]: I0203 07:07:43.615300 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbxgf" 
event={"ID":"eeebb37b-8b7c-4eb8-b14c-5cde383bac59","Type":"ContainerDied","Data":"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479"} Feb 03 07:07:44 crc kubenswrapper[4872]: I0203 07:07:44.627148 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbxgf" event={"ID":"eeebb37b-8b7c-4eb8-b14c-5cde383bac59","Type":"ContainerStarted","Data":"7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be"} Feb 03 07:07:44 crc kubenswrapper[4872]: I0203 07:07:44.661444 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gbxgf" podStartSLOduration=3.171867274 podStartE2EDuration="8.661411255s" podCreationTimestamp="2026-02-03 07:07:36 +0000 UTC" firstStartedPulling="2026-02-03 07:07:38.566537076 +0000 UTC m=+4029.149228490" lastFinishedPulling="2026-02-03 07:07:44.056081057 +0000 UTC m=+4034.638772471" observedRunningTime="2026-02-03 07:07:44.653055003 +0000 UTC m=+4035.235746417" watchObservedRunningTime="2026-02-03 07:07:44.661411255 +0000 UTC m=+4035.244102669" Feb 03 07:07:45 crc kubenswrapper[4872]: I0203 07:07:45.644748 4872 generic.go:334] "Generic (PLEG): container finished" podID="ab488c2c-7a02-4e73-8aaa-5e0197d51631" containerID="510d481f2cfb71247d9a9be8adccbf186d5592926ea0fe9a7840b8cf27a7805c" exitCode=0 Feb 03 07:07:45 crc kubenswrapper[4872]: I0203 07:07:45.644819 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ab488c2c-7a02-4e73-8aaa-5e0197d51631","Type":"ContainerDied","Data":"510d481f2cfb71247d9a9be8adccbf186d5592926ea0fe9a7840b8cf27a7805c"} Feb 03 07:07:46 crc kubenswrapper[4872]: I0203 07:07:46.826077 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:46 crc kubenswrapper[4872]: I0203 07:07:46.826361 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:46 crc kubenswrapper[4872]: I0203 07:07:46.888264 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.104187 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.255604 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config-secret\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.255750 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.255788 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-temporary\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.255868 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ca-certs\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.255900 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-przqz\" (UniqueName: \"kubernetes.io/projected/ab488c2c-7a02-4e73-8aaa-5e0197d51631-kube-api-access-przqz\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.255939 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-config-data\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.255985 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ssh-key\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.256023 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-workdir\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.256043 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config\") pod \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\" (UID: \"ab488c2c-7a02-4e73-8aaa-5e0197d51631\") " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.260506 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.265120 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-config-data" (OuterVolumeSpecName: "config-data") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.268205 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab488c2c-7a02-4e73-8aaa-5e0197d51631-kube-api-access-przqz" (OuterVolumeSpecName: "kube-api-access-przqz") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "kube-api-access-przqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.271842 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.276218 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.305913 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.320943 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.326352 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.339209 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ab488c2c-7a02-4e73-8aaa-5e0197d51631" (UID: "ab488c2c-7a02-4e73-8aaa-5e0197d51631"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.358068 4872 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.358096 4872 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.358108 4872 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.358119 4872 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.361225 4872 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.361253 4872 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ab488c2c-7a02-4e73-8aaa-5e0197d51631-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.361264 4872 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ab488c2c-7a02-4e73-8aaa-5e0197d51631-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.361276 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-przqz\" (UniqueName: \"kubernetes.io/projected/ab488c2c-7a02-4e73-8aaa-5e0197d51631-kube-api-access-przqz\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.361286 4872 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab488c2c-7a02-4e73-8aaa-5e0197d51631-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.410025 4872 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.462701 4872 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.663644 4872 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"ab488c2c-7a02-4e73-8aaa-5e0197d51631","Type":"ContainerDied","Data":"e62e6e3cd2008fde1cc3856823bccdf4685b41d881aec65f5b04f9ce86c8f49b"} Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.663981 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62e6e3cd2008fde1cc3856823bccdf4685b41d881aec65f5b04f9ce86c8f49b" Feb 03 07:07:47 crc kubenswrapper[4872]: I0203 07:07:47.663732 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.514886 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2nrhq"] Feb 03 07:07:49 crc kubenswrapper[4872]: E0203 07:07:49.515296 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab488c2c-7a02-4e73-8aaa-5e0197d51631" containerName="tempest-tests-tempest-tests-runner" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.515310 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab488c2c-7a02-4e73-8aaa-5e0197d51631" containerName="tempest-tests-tempest-tests-runner" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.515547 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab488c2c-7a02-4e73-8aaa-5e0197d51631" containerName="tempest-tests-tempest-tests-runner" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.517142 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.524895 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nrhq"] Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.605006 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-utilities\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.605137 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-catalog-content\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.605210 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtb4q\" (UniqueName: \"kubernetes.io/projected/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-kube-api-access-dtb4q\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.707353 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-catalog-content\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.707435 4872 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-dtb4q\" (UniqueName: \"kubernetes.io/projected/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-kube-api-access-dtb4q\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.707537 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-utilities\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.708036 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-utilities\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.708262 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-catalog-content\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.727431 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtb4q\" (UniqueName: \"kubernetes.io/projected/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-kube-api-access-dtb4q\") pod \"redhat-marketplace-2nrhq\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:49 crc kubenswrapper[4872]: I0203 07:07:49.842129 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:07:50 crc kubenswrapper[4872]: W0203 07:07:50.358374 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9c5bb5_9cda_4498_8f98_3b0bfaab186a.slice/crio-17536bb3cc006d90cff3d1a49c614eacd66d918bbdb46ffcedeef30eae1504bc WatchSource:0}: Error finding container 17536bb3cc006d90cff3d1a49c614eacd66d918bbdb46ffcedeef30eae1504bc: Status 404 returned error can't find the container with id 17536bb3cc006d90cff3d1a49c614eacd66d918bbdb46ffcedeef30eae1504bc Feb 03 07:07:50 crc kubenswrapper[4872]: I0203 07:07:50.370232 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nrhq"] Feb 03 07:07:50 crc kubenswrapper[4872]: I0203 07:07:50.687373 4872 generic.go:334] "Generic (PLEG): container finished" podID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerID="ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2" exitCode=0 Feb 03 07:07:50 crc kubenswrapper[4872]: I0203 07:07:50.687680 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nrhq" event={"ID":"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a","Type":"ContainerDied","Data":"ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2"} Feb 03 07:07:50 crc kubenswrapper[4872]: I0203 07:07:50.687715 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nrhq" event={"ID":"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a","Type":"ContainerStarted","Data":"17536bb3cc006d90cff3d1a49c614eacd66d918bbdb46ffcedeef30eae1504bc"} Feb 03 07:07:51 crc kubenswrapper[4872]: I0203 07:07:51.697550 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nrhq" event={"ID":"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a","Type":"ContainerStarted","Data":"d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d"} Feb 03 07:07:52 crc kubenswrapper[4872]: I0203 07:07:52.707969 4872 generic.go:334] "Generic (PLEG): container finished" podID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerID="d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d" exitCode=0 Feb 03 07:07:52 crc kubenswrapper[4872]: I0203 07:07:52.708036 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nrhq" event={"ID":"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a","Type":"ContainerDied","Data":"d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d"} Feb 03 07:07:54 crc kubenswrapper[4872]: I0203 07:07:54.725626 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nrhq" event={"ID":"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a","Type":"ContainerStarted","Data":"96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335"} Feb 03 07:07:54 crc kubenswrapper[4872]: I0203 07:07:54.747963 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2nrhq" podStartSLOduration=3.172703192 podStartE2EDuration="5.74794193s" podCreationTimestamp="2026-02-03 07:07:49 +0000 UTC" firstStartedPulling="2026-02-03 07:07:50.689231574 +0000 UTC m=+4041.271922988" lastFinishedPulling="2026-02-03 07:07:53.264470312 +0000 UTC m=+4043.847161726" observedRunningTime="2026-02-03 07:07:54.743558033 +0000 UTC m=+4045.326249457" watchObservedRunningTime="2026-02-03 07:07:54.74794193 +0000 UTC m=+4045.330633344" Feb 03 
Feb 03 07:07:56 crc kubenswrapper[4872]: I0203 07:07:56.867521 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gbxgf"
Feb 03 07:07:56 crc kubenswrapper[4872]: I0203 07:07:56.932137 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gbxgf"]
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.554839 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.556557 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.558634 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-n66vq"
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.565878 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.657007 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4f4c\" (UniqueName: \"kubernetes.io/projected/105cad3e-c6c1-4dfa-93dd-9138d760b916-kube-api-access-f4f4c\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"105cad3e-c6c1-4dfa-93dd-9138d760b916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.657144 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"105cad3e-c6c1-4dfa-93dd-9138d760b916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.748569 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gbxgf" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerName="registry-server" containerID="cri-o://7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be" gracePeriod=2
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.759062 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"105cad3e-c6c1-4dfa-93dd-9138d760b916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.759209 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4f4c\" (UniqueName: \"kubernetes.io/projected/105cad3e-c6c1-4dfa-93dd-9138d760b916-kube-api-access-f4f4c\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"105cad3e-c6c1-4dfa-93dd-9138d760b916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.760278 4872 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: 
\"105cad3e-c6c1-4dfa-93dd-9138d760b916\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.790740 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4f4c\" (UniqueName: \"kubernetes.io/projected/105cad3e-c6c1-4dfa-93dd-9138d760b916-kube-api-access-f4f4c\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"105cad3e-c6c1-4dfa-93dd-9138d760b916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.806236 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"105cad3e-c6c1-4dfa-93dd-9138d760b916\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 07:07:57 crc kubenswrapper[4872]: I0203 07:07:57.922800 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.448936 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.583277 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn4jg\" (UniqueName: \"kubernetes.io/projected/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-kube-api-access-vn4jg\") pod \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.583358 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-utilities\") pod \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.583441 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-catalog-content\") pod \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\" (UID: \"eeebb37b-8b7c-4eb8-b14c-5cde383bac59\") " Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.586346 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-utilities" (OuterVolumeSpecName: "utilities") pod "eeebb37b-8b7c-4eb8-b14c-5cde383bac59" (UID: "eeebb37b-8b7c-4eb8-b14c-5cde383bac59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.589938 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-kube-api-access-vn4jg" (OuterVolumeSpecName: "kube-api-access-vn4jg") pod "eeebb37b-8b7c-4eb8-b14c-5cde383bac59" (UID: "eeebb37b-8b7c-4eb8-b14c-5cde383bac59"). InnerVolumeSpecName "kube-api-access-vn4jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.642139 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeebb37b-8b7c-4eb8-b14c-5cde383bac59" (UID: "eeebb37b-8b7c-4eb8-b14c-5cde383bac59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.686132 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn4jg\" (UniqueName: \"kubernetes.io/projected/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-kube-api-access-vn4jg\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.686165 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.686178 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeebb37b-8b7c-4eb8-b14c-5cde383bac59-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.722287 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 07:07:58 crc kubenswrapper[4872]: W0203 07:07:58.730891 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod105cad3e_c6c1_4dfa_93dd_9138d760b916.slice/crio-6a5562fc13da76ac5e79bd710bd0c55d53eeedfe002c5cc9ec10d52827fff18a WatchSource:0}: Error finding container 6a5562fc13da76ac5e79bd710bd0c55d53eeedfe002c5cc9ec10d52827fff18a: Status 404 returned error can't find the container with id 6a5562fc13da76ac5e79bd710bd0c55d53eeedfe002c5cc9ec10d52827fff18a Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.763792 4872 generic.go:334] "Generic (PLEG): container finished" podID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerID="7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be" exitCode=0 Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.763877 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbxgf" event={"ID":"eeebb37b-8b7c-4eb8-b14c-5cde383bac59","Type":"ContainerDied","Data":"7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be"} Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.763906 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gbxgf" event={"ID":"eeebb37b-8b7c-4eb8-b14c-5cde383bac59","Type":"ContainerDied","Data":"d7f9470390f8d894d4b08c616565f31e1c8aca42b0865de5dab247300ff7f108"} Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.763943 4872 scope.go:117] "RemoveContainer" containerID="7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.763975 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gbxgf" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.767092 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"105cad3e-c6c1-4dfa-93dd-9138d760b916","Type":"ContainerStarted","Data":"6a5562fc13da76ac5e79bd710bd0c55d53eeedfe002c5cc9ec10d52827fff18a"} Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.800255 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gbxgf"] Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.801651 4872 scope.go:117] "RemoveContainer" containerID="a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.807868 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gbxgf"] Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.854538 4872 scope.go:117] "RemoveContainer" containerID="ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.907859 4872 scope.go:117] "RemoveContainer" containerID="7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be" Feb 03 07:07:58 crc kubenswrapper[4872]: E0203 07:07:58.909095 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be\": container with ID starting with 7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be not found: ID does not exist" containerID="7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.909136 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be"} err="failed to get container status \"7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be\": rpc error: code = NotFound desc = could not find container \"7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be\": container with ID starting with 7908b8e9c274a587e92d08ecfba524fad8a55bd8ddd87d43ae2c0bcc66f188be not found: ID does not exist" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.909159 4872 scope.go:117] "RemoveContainer" containerID="a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479" Feb 03 07:07:58 crc kubenswrapper[4872]: E0203 07:07:58.909513 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479\": container with ID starting with a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479 not found: ID does not exist" containerID="a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479" Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.909546 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479"} err="failed to get container status \"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479\": rpc error: code = NotFound desc = could not find container \"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479\": container with ID starting with 
Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.909546 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479"} err="failed to get container status \"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479\": rpc error: code = NotFound desc = could not find container \"a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479\": container with ID starting with a2c55d5f1b02a04b7a35e2a419c7ebe83024ab80379d7e6407f4913489aa1479 not found: ID does not exist"
Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.909584 4872 scope.go:117] "RemoveContainer" containerID="ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e"
Feb 03 07:07:58 crc kubenswrapper[4872]: E0203 07:07:58.910516 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e\": container with ID starting with ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e not found: ID does not exist" containerID="ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e"
Feb 03 07:07:58 crc kubenswrapper[4872]: I0203 07:07:58.910542 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e"} err="failed to get container status \"ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e\": rpc error: code = NotFound desc = could not find container \"ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e\": container with ID starting with ed37e0397193eb0475931770891ce929ebf730d256490e7892ae069ddd59de5e not found: ID does not exist"
Feb 03 07:07:59 crc kubenswrapper[4872]: I0203 07:07:59.842369 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2nrhq"
Feb 03 07:07:59 crc kubenswrapper[4872]: I0203 07:07:59.842418 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2nrhq"
Feb 03 07:07:59 crc kubenswrapper[4872]: I0203 07:07:59.885981 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2nrhq"
Feb 03 07:08:00 crc kubenswrapper[4872]: I0203 07:08:00.135178 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" path="/var/lib/kubelet/pods/eeebb37b-8b7c-4eb8-b14c-5cde383bac59/volumes"
Feb 03 07:08:00 crc kubenswrapper[4872]: I0203 07:08:00.804676 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"105cad3e-c6c1-4dfa-93dd-9138d760b916","Type":"ContainerStarted","Data":"4f0fdafbd6c6bb30bd3b4056d7aa9c84e1ee7bad98da8bec5b62fa8b87ca9c48"}
Feb 03 07:08:00 crc kubenswrapper[4872]: I0203 07:08:00.825088 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.351064651 podStartE2EDuration="3.82506195s" podCreationTimestamp="2026-02-03 07:07:57 +0000 UTC" firstStartedPulling="2026-02-03 07:07:58.739213206 +0000 UTC m=+4049.321904630" lastFinishedPulling="2026-02-03 07:08:00.213210515 +0000 UTC m=+4050.795901929" observedRunningTime="2026-02-03 07:08:00.819041624 +0000 UTC m=+4051.401733048" watchObservedRunningTime="2026-02-03 07:08:00.82506195 +0000 UTC m=+4051.407753374"
Feb 03 07:08:00 crc kubenswrapper[4872]: I0203 07:08:00.857552 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2nrhq"
Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.279707 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
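The failing check above is a plain HTTP GET against the daemon's health endpoint, and the entries that follow show the consequence: kubelet marks the container unhealthy, kills it with the configured grace period, and restarts it. A standalone sketch of that probe shape, not kubelet's prober code (kubelet's exact status classification may differ; a transport error such as the refused connection above always counts as a failure):

```go
// One HTTP liveness-style check with a short timeout, mirroring the shape
// of the failing probe above. Nothing listens on 127.0.0.1:8798 here, so
// running this prints a refused-connection error much like the log output.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // transport error, e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("probe failed:", err) // kubelet would kill and restart the container
	}
}
```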
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.280110 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.280180 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.281347 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df6fc6a27a71f15957fc4adf158004172fef9aa3018164e62164ec2c8b39d146"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.281431 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://df6fc6a27a71f15957fc4adf158004172fef9aa3018164e62164ec2c8b39d146" gracePeriod=600 Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.300894 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nrhq"] Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.813941 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="df6fc6a27a71f15957fc4adf158004172fef9aa3018164e62164ec2c8b39d146" exitCode=0 Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.814247 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"df6fc6a27a71f15957fc4adf158004172fef9aa3018164e62164ec2c8b39d146"} Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.814290 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4"} Feb 03 07:08:01 crc kubenswrapper[4872]: I0203 07:08:01.814306 4872 scope.go:117] "RemoveContainer" containerID="cea5b19958faff9c56a607f07d22dcc9b6b0585fffdf56a1a5ad85274678cede" Feb 03 07:08:02 crc kubenswrapper[4872]: I0203 07:08:02.823617 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2nrhq" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="registry-server" containerID="cri-o://96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335" gracePeriod=2 Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.305121 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.376673 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-utilities\") pod \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.376820 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-catalog-content\") pod \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.376975 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtb4q\" (UniqueName: \"kubernetes.io/projected/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-kube-api-access-dtb4q\") pod \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\" (UID: \"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a\") " Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.377032 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-utilities" (OuterVolumeSpecName: "utilities") pod "ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" (UID: "ec9c5bb5-9cda-4498-8f98-3b0bfaab186a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.377619 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.386075 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-kube-api-access-dtb4q" (OuterVolumeSpecName: "kube-api-access-dtb4q") pod "ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" (UID: "ec9c5bb5-9cda-4498-8f98-3b0bfaab186a"). InnerVolumeSpecName "kube-api-access-dtb4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.405480 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" (UID: "ec9c5bb5-9cda-4498-8f98-3b0bfaab186a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.479600 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.479663 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtb4q\" (UniqueName: \"kubernetes.io/projected/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a-kube-api-access-dtb4q\") on node \"crc\" DevicePath \"\"" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.833483 4872 generic.go:334] "Generic (PLEG): container finished" podID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerID="96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335" exitCode=0 Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.833523 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nrhq" event={"ID":"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a","Type":"ContainerDied","Data":"96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335"} Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.833538 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2nrhq" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.834842 4872 scope.go:117] "RemoveContainer" containerID="96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.834758 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2nrhq" event={"ID":"ec9c5bb5-9cda-4498-8f98-3b0bfaab186a","Type":"ContainerDied","Data":"17536bb3cc006d90cff3d1a49c614eacd66d918bbdb46ffcedeef30eae1504bc"} Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.866350 4872 scope.go:117] "RemoveContainer" containerID="d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d" Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.873263 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nrhq"] Feb 03 07:08:03 crc kubenswrapper[4872]: I0203 07:08:03.882241 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2nrhq"] Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.071130 4872 scope.go:117] "RemoveContainer" containerID="ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2" Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.134070 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" path="/var/lib/kubelet/pods/ec9c5bb5-9cda-4498-8f98-3b0bfaab186a/volumes" Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.229211 4872 scope.go:117] "RemoveContainer" containerID="96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335" Feb 03 07:08:04 crc kubenswrapper[4872]: E0203 07:08:04.230163 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335\": container with ID starting with 96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335 not found: ID does not exist" containerID="96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335" Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.230222 4872 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335"} err="failed to get container status \"96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335\": rpc error: code = NotFound desc = could not find container \"96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335\": container with ID starting with 96988691ea582695282ea100075544fb272d2f6e950bb2ba32fb624fca5ae335 not found: ID does not exist" Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.230268 4872 scope.go:117] "RemoveContainer" containerID="d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d" Feb 03 07:08:04 crc kubenswrapper[4872]: E0203 07:08:04.230923 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d\": container with ID starting with d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d not found: ID does not exist" containerID="d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d" Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.230978 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d"} err="failed to get container status \"d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d\": rpc error: code = NotFound desc = could not find container \"d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d\": container with ID starting with d4ae23edf20435b0102893bf9267ca00592ecd7058306e76c2c32b027f9a278d not found: ID does not exist" Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.231011 4872 scope.go:117] "RemoveContainer" containerID="ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2" Feb 03 07:08:04 crc kubenswrapper[4872]: E0203 07:08:04.231347 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2\": container with ID starting with ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2 not found: ID does not exist" containerID="ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2" Feb 03 07:08:04 crc kubenswrapper[4872]: I0203 07:08:04.231383 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2"} err="failed to get container status \"ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2\": rpc error: code = NotFound desc = could not find container \"ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2\": container with ID starting with ee9023d85cdcaefb24b26f5ef8e86a727eda0b85ba7e89cac5171639cf637bb2 not found: ID does not exist" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.679022 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fm59f"] Feb 03 07:08:07 crc kubenswrapper[4872]: E0203 07:08:07.680062 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerName="registry-server" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680076 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" 
containerName="registry-server" Feb 03 07:08:07 crc kubenswrapper[4872]: E0203 07:08:07.680097 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="registry-server" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680105 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="registry-server" Feb 03 07:08:07 crc kubenswrapper[4872]: E0203 07:08:07.680130 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerName="extract-utilities" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680140 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerName="extract-utilities" Feb 03 07:08:07 crc kubenswrapper[4872]: E0203 07:08:07.680151 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="extract-content" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680158 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="extract-content" Feb 03 07:08:07 crc kubenswrapper[4872]: E0203 07:08:07.680171 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerName="extract-content" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680180 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerName="extract-content" Feb 03 07:08:07 crc kubenswrapper[4872]: E0203 07:08:07.680212 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="extract-utilities" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680220 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="extract-utilities" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680438 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9c5bb5-9cda-4498-8f98-3b0bfaab186a" containerName="registry-server" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.680470 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeebb37b-8b7c-4eb8-b14c-5cde383bac59" containerName="registry-server" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.682043 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.699451 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm59f"] Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.758357 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-utilities\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.758591 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-catalog-content\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.758676 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p92\" (UniqueName: \"kubernetes.io/projected/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-kube-api-access-t2p92\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.861454 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-utilities\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.861594 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-catalog-content\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.861656 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2p92\" (UniqueName: \"kubernetes.io/projected/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-kube-api-access-t2p92\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.862476 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-utilities\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.863099 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-catalog-content\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:07 crc kubenswrapper[4872]: I0203 07:08:07.884783 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-t2p92\" (UniqueName: \"kubernetes.io/projected/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-kube-api-access-t2p92\") pod \"redhat-operators-fm59f\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:08 crc kubenswrapper[4872]: I0203 07:08:08.007540 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:09 crc kubenswrapper[4872]: I0203 07:08:09.114711 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm59f"] Feb 03 07:08:09 crc kubenswrapper[4872]: I0203 07:08:09.893639 4872 generic.go:334] "Generic (PLEG): container finished" podID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerID="65531830dfcf0527408f3541b1ea2129a2d06ceab4ad87597357cd6abf97f1cb" exitCode=0 Feb 03 07:08:09 crc kubenswrapper[4872]: I0203 07:08:09.893777 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm59f" event={"ID":"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c","Type":"ContainerDied","Data":"65531830dfcf0527408f3541b1ea2129a2d06ceab4ad87597357cd6abf97f1cb"} Feb 03 07:08:09 crc kubenswrapper[4872]: I0203 07:08:09.893862 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm59f" event={"ID":"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c","Type":"ContainerStarted","Data":"e646a683c4513eeae86ae2ce1f92ff849674f5ef5b3b6161433348b3e4715793"} Feb 03 07:08:10 crc kubenswrapper[4872]: I0203 07:08:10.905730 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm59f" event={"ID":"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c","Type":"ContainerStarted","Data":"e6154728498c48b61399998a0bb1d64e55dd3bc2c55d219cea3350574871d8fa"} Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.012900 4872 generic.go:334] "Generic (PLEG): container finished" podID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerID="e6154728498c48b61399998a0bb1d64e55dd3bc2c55d219cea3350574871d8fa" exitCode=0 Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.012934 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm59f" event={"ID":"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c","Type":"ContainerDied","Data":"e6154728498c48b61399998a0bb1d64e55dd3bc2c55d219cea3350574871d8fa"} Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.051348 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6qslc/must-gather-29slc"] Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.053390 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.055786 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6qslc"/"default-dockercfg-mb5z6" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.056667 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6qslc"/"openshift-service-ca.crt" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.064547 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6qslc"/"kube-root-ca.crt" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.123384 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6qslc/must-gather-29slc"] Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.163391 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9aec36b3-fc51-4b2e-9529-b55afc974191-must-gather-output\") pod \"must-gather-29slc\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.163481 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4l2n\" (UniqueName: \"kubernetes.io/projected/9aec36b3-fc51-4b2e-9529-b55afc974191-kube-api-access-h4l2n\") pod \"must-gather-29slc\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.265561 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9aec36b3-fc51-4b2e-9529-b55afc974191-must-gather-output\") pod \"must-gather-29slc\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.265932 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4l2n\" (UniqueName: \"kubernetes.io/projected/9aec36b3-fc51-4b2e-9529-b55afc974191-kube-api-access-h4l2n\") pod \"must-gather-29slc\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.266084 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9aec36b3-fc51-4b2e-9529-b55afc974191-must-gather-output\") pod \"must-gather-29slc\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.288539 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4l2n\" (UniqueName: \"kubernetes.io/projected/9aec36b3-fc51-4b2e-9529-b55afc974191-kube-api-access-h4l2n\") pod \"must-gather-29slc\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.371776 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:08:22 crc kubenswrapper[4872]: I0203 07:08:22.903674 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6qslc/must-gather-29slc"] Feb 03 07:08:22 crc kubenswrapper[4872]: W0203 07:08:22.915020 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9aec36b3_fc51_4b2e_9529_b55afc974191.slice/crio-eceb871f1b18f0b54a832a597f919dc1f2e057744aa9a2536713a411a222f886 WatchSource:0}: Error finding container eceb871f1b18f0b54a832a597f919dc1f2e057744aa9a2536713a411a222f886: Status 404 returned error can't find the container with id eceb871f1b18f0b54a832a597f919dc1f2e057744aa9a2536713a411a222f886 Feb 03 07:08:23 crc kubenswrapper[4872]: I0203 07:08:23.022827 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/must-gather-29slc" event={"ID":"9aec36b3-fc51-4b2e-9529-b55afc974191","Type":"ContainerStarted","Data":"eceb871f1b18f0b54a832a597f919dc1f2e057744aa9a2536713a411a222f886"} Feb 03 07:08:23 crc kubenswrapper[4872]: I0203 07:08:23.024977 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm59f" event={"ID":"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c","Type":"ContainerStarted","Data":"3c49067f0d3ef15e2a19746e5d3d3f4e4b321511929443d6a48e5851f9fe0808"} Feb 03 07:08:23 crc kubenswrapper[4872]: I0203 07:08:23.045030 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fm59f" podStartSLOduration=3.24384107 podStartE2EDuration="16.045010177s" podCreationTimestamp="2026-02-03 07:08:07 +0000 UTC" firstStartedPulling="2026-02-03 07:08:09.895502952 +0000 UTC m=+4060.478194386" lastFinishedPulling="2026-02-03 07:08:22.696672079 +0000 UTC m=+4073.279363493" observedRunningTime="2026-02-03 07:08:23.044196287 +0000 UTC m=+4073.626887721" watchObservedRunningTime="2026-02-03 07:08:23.045010177 +0000 UTC m=+4073.627701591" Feb 03 07:08:24 crc kubenswrapper[4872]: I0203 07:08:24.655227 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5a46bed9-4154-4a62-8805-fe67c55a2d89" containerName="galera" probeResult="failure" output="command timed out" Feb 03 07:08:28 crc kubenswrapper[4872]: I0203 07:08:28.008493 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:28 crc kubenswrapper[4872]: I0203 07:08:28.010374 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:08:29 crc kubenswrapper[4872]: I0203 07:08:29.069073 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm59f" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" probeResult="failure" output=< Feb 03 07:08:29 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:08:29 crc kubenswrapper[4872]: > Feb 03 07:08:30 crc kubenswrapper[4872]: I0203 07:08:30.096617 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/must-gather-29slc" event={"ID":"9aec36b3-fc51-4b2e-9529-b55afc974191","Type":"ContainerStarted","Data":"a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795"} Feb 03 07:08:30 crc kubenswrapper[4872]: I0203 07:08:30.096666 4872 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-6qslc/must-gather-29slc" event={"ID":"9aec36b3-fc51-4b2e-9529-b55afc974191","Type":"ContainerStarted","Data":"a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f"} Feb 03 07:08:30 crc kubenswrapper[4872]: I0203 07:08:30.119038 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6qslc/must-gather-29slc" podStartSLOduration=2.253370336 podStartE2EDuration="8.119011246s" podCreationTimestamp="2026-02-03 07:08:22 +0000 UTC" firstStartedPulling="2026-02-03 07:08:22.919957945 +0000 UTC m=+4073.502649359" lastFinishedPulling="2026-02-03 07:08:28.785598855 +0000 UTC m=+4079.368290269" observedRunningTime="2026-02-03 07:08:30.111455774 +0000 UTC m=+4080.694147198" watchObservedRunningTime="2026-02-03 07:08:30.119011246 +0000 UTC m=+4080.701702660" Feb 03 07:08:34 crc kubenswrapper[4872]: I0203 07:08:34.653217 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5a46bed9-4154-4a62-8805-fe67c55a2d89" containerName="galera" probeResult="failure" output="command timed out" Feb 03 07:08:34 crc kubenswrapper[4872]: I0203 07:08:34.653463 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="5a46bed9-4154-4a62-8805-fe67c55a2d89" containerName="galera" probeResult="failure" output="command timed out" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.485462 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6qslc/crc-debug-s7bdg"] Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.486828 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.542206 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4pzm\" (UniqueName: \"kubernetes.io/projected/becabfe2-053a-44ba-888c-8e32f14ae4da-kube-api-access-m4pzm\") pod \"crc-debug-s7bdg\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.542665 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becabfe2-053a-44ba-888c-8e32f14ae4da-host\") pod \"crc-debug-s7bdg\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.645944 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becabfe2-053a-44ba-888c-8e32f14ae4da-host\") pod \"crc-debug-s7bdg\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.646046 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4pzm\" (UniqueName: \"kubernetes.io/projected/becabfe2-053a-44ba-888c-8e32f14ae4da-kube-api-access-m4pzm\") pod \"crc-debug-s7bdg\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.646142 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becabfe2-053a-44ba-888c-8e32f14ae4da-host\") pod 
\"crc-debug-s7bdg\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.687525 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4pzm\" (UniqueName: \"kubernetes.io/projected/becabfe2-053a-44ba-888c-8e32f14ae4da-kube-api-access-m4pzm\") pod \"crc-debug-s7bdg\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: I0203 07:08:35.810410 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:08:35 crc kubenswrapper[4872]: W0203 07:08:35.841978 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbecabfe2_053a_44ba_888c_8e32f14ae4da.slice/crio-e303d9784258d7d4f604134698bbac9e7ad6e69aad2ff88b3b126fd4bf1f4c07 WatchSource:0}: Error finding container e303d9784258d7d4f604134698bbac9e7ad6e69aad2ff88b3b126fd4bf1f4c07: Status 404 returned error can't find the container with id e303d9784258d7d4f604134698bbac9e7ad6e69aad2ff88b3b126fd4bf1f4c07 Feb 03 07:08:36 crc kubenswrapper[4872]: I0203 07:08:36.184271 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" event={"ID":"becabfe2-053a-44ba-888c-8e32f14ae4da","Type":"ContainerStarted","Data":"e303d9784258d7d4f604134698bbac9e7ad6e69aad2ff88b3b126fd4bf1f4c07"} Feb 03 07:08:39 crc kubenswrapper[4872]: I0203 07:08:39.071973 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm59f" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" probeResult="failure" output=< Feb 03 07:08:39 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:08:39 crc kubenswrapper[4872]: > Feb 03 07:08:48 crc kubenswrapper[4872]: I0203 07:08:48.323034 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" event={"ID":"becabfe2-053a-44ba-888c-8e32f14ae4da","Type":"ContainerStarted","Data":"c1e0d1410d252555d0eebb8b887248d9ef62ff9b9273bdbd244078b2c249143d"} Feb 03 07:08:48 crc kubenswrapper[4872]: I0203 07:08:48.345906 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" podStartSLOduration=1.3657406650000001 podStartE2EDuration="13.345884317s" podCreationTimestamp="2026-02-03 07:08:35 +0000 UTC" firstStartedPulling="2026-02-03 07:08:35.844976901 +0000 UTC m=+4086.427668315" lastFinishedPulling="2026-02-03 07:08:47.825120553 +0000 UTC m=+4098.407811967" observedRunningTime="2026-02-03 07:08:48.339518483 +0000 UTC m=+4098.922209907" watchObservedRunningTime="2026-02-03 07:08:48.345884317 +0000 UTC m=+4098.928575741" Feb 03 07:08:49 crc kubenswrapper[4872]: I0203 07:08:49.070303 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm59f" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" probeResult="failure" output=< Feb 03 07:08:49 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:08:49 crc kubenswrapper[4872]: > Feb 03 07:08:59 crc kubenswrapper[4872]: I0203 07:08:59.056017 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm59f" 
podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" probeResult="failure" output=< Feb 03 07:08:59 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:08:59 crc kubenswrapper[4872]: > Feb 03 07:09:09 crc kubenswrapper[4872]: I0203 07:09:09.067296 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm59f" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" probeResult="failure" output=< Feb 03 07:09:09 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:09:09 crc kubenswrapper[4872]: > Feb 03 07:09:19 crc kubenswrapper[4872]: I0203 07:09:19.063903 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm59f" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" probeResult="failure" output=< Feb 03 07:09:19 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:09:19 crc kubenswrapper[4872]: > Feb 03 07:09:29 crc kubenswrapper[4872]: I0203 07:09:29.051608 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm59f" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" probeResult="failure" output=< Feb 03 07:09:29 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:09:29 crc kubenswrapper[4872]: > Feb 03 07:09:38 crc kubenswrapper[4872]: I0203 07:09:38.061223 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:09:38 crc kubenswrapper[4872]: I0203 07:09:38.113996 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:09:38 crc kubenswrapper[4872]: I0203 07:09:38.946833 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm59f"] Feb 03 07:09:39 crc kubenswrapper[4872]: I0203 07:09:39.937950 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fm59f" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" containerID="cri-o://3c49067f0d3ef15e2a19746e5d3d3f4e4b321511929443d6a48e5851f9fe0808" gracePeriod=2 Feb 03 07:09:40 crc kubenswrapper[4872]: I0203 07:09:40.954719 4872 generic.go:334] "Generic (PLEG): container finished" podID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerID="3c49067f0d3ef15e2a19746e5d3d3f4e4b321511929443d6a48e5851f9fe0808" exitCode=0 Feb 03 07:09:40 crc kubenswrapper[4872]: I0203 07:09:40.956277 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm59f" event={"ID":"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c","Type":"ContainerDied","Data":"3c49067f0d3ef15e2a19746e5d3d3f4e4b321511929443d6a48e5851f9fe0808"} Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.100316 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.178471 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-catalog-content\") pod \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.178663 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-utilities\") pod \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.178894 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2p92\" (UniqueName: \"kubernetes.io/projected/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-kube-api-access-t2p92\") pod \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\" (UID: \"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c\") " Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.180940 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-utilities" (OuterVolumeSpecName: "utilities") pod "d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" (UID: "d87ee4a5-5647-4a12-9d9b-c1c07be8b33c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.187519 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-kube-api-access-t2p92" (OuterVolumeSpecName: "kube-api-access-t2p92") pod "d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" (UID: "d87ee4a5-5647-4a12-9d9b-c1c07be8b33c"). InnerVolumeSpecName "kube-api-access-t2p92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.281950 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.282148 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2p92\" (UniqueName: \"kubernetes.io/projected/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-kube-api-access-t2p92\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.331401 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" (UID: "d87ee4a5-5647-4a12-9d9b-c1c07be8b33c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.384401 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.967990 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm59f" event={"ID":"d87ee4a5-5647-4a12-9d9b-c1c07be8b33c","Type":"ContainerDied","Data":"e646a683c4513eeae86ae2ce1f92ff849674f5ef5b3b6161433348b3e4715793"} Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.968307 4872 scope.go:117] "RemoveContainer" containerID="3c49067f0d3ef15e2a19746e5d3d3f4e4b321511929443d6a48e5851f9fe0808" Feb 03 07:09:41 crc kubenswrapper[4872]: I0203 07:09:41.968096 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm59f" Feb 03 07:09:42 crc kubenswrapper[4872]: I0203 07:09:41.999822 4872 scope.go:117] "RemoveContainer" containerID="e6154728498c48b61399998a0bb1d64e55dd3bc2c55d219cea3350574871d8fa" Feb 03 07:09:42 crc kubenswrapper[4872]: I0203 07:09:42.033503 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm59f"] Feb 03 07:09:42 crc kubenswrapper[4872]: I0203 07:09:42.047228 4872 scope.go:117] "RemoveContainer" containerID="65531830dfcf0527408f3541b1ea2129a2d06ceab4ad87597357cd6abf97f1cb" Feb 03 07:09:42 crc kubenswrapper[4872]: I0203 07:09:42.050139 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fm59f"] Feb 03 07:09:42 crc kubenswrapper[4872]: I0203 07:09:42.132172 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" path="/var/lib/kubelet/pods/d87ee4a5-5647-4a12-9d9b-c1c07be8b33c/volumes" Feb 03 07:09:46 crc kubenswrapper[4872]: I0203 07:09:46.005029 4872 generic.go:334] "Generic (PLEG): container finished" podID="becabfe2-053a-44ba-888c-8e32f14ae4da" containerID="c1e0d1410d252555d0eebb8b887248d9ef62ff9b9273bdbd244078b2c249143d" exitCode=0 Feb 03 07:09:46 crc kubenswrapper[4872]: I0203 07:09:46.005107 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" event={"ID":"becabfe2-053a-44ba-888c-8e32f14ae4da","Type":"ContainerDied","Data":"c1e0d1410d252555d0eebb8b887248d9ef62ff9b9273bdbd244078b2c249143d"} Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.131955 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.163768 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6qslc/crc-debug-s7bdg"] Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.177479 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6qslc/crc-debug-s7bdg"] Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.192066 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becabfe2-053a-44ba-888c-8e32f14ae4da-host\") pod \"becabfe2-053a-44ba-888c-8e32f14ae4da\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.192208 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4pzm\" (UniqueName: \"kubernetes.io/projected/becabfe2-053a-44ba-888c-8e32f14ae4da-kube-api-access-m4pzm\") pod \"becabfe2-053a-44ba-888c-8e32f14ae4da\" (UID: \"becabfe2-053a-44ba-888c-8e32f14ae4da\") " Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.192434 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becabfe2-053a-44ba-888c-8e32f14ae4da-host" (OuterVolumeSpecName: "host") pod "becabfe2-053a-44ba-888c-8e32f14ae4da" (UID: "becabfe2-053a-44ba-888c-8e32f14ae4da"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.192916 4872 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/becabfe2-053a-44ba-888c-8e32f14ae4da-host\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.197249 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becabfe2-053a-44ba-888c-8e32f14ae4da-kube-api-access-m4pzm" (OuterVolumeSpecName: "kube-api-access-m4pzm") pod "becabfe2-053a-44ba-888c-8e32f14ae4da" (UID: "becabfe2-053a-44ba-888c-8e32f14ae4da"). InnerVolumeSpecName "kube-api-access-m4pzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:09:47 crc kubenswrapper[4872]: I0203 07:09:47.294500 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4pzm\" (UniqueName: \"kubernetes.io/projected/becabfe2-053a-44ba-888c-8e32f14ae4da-kube-api-access-m4pzm\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.021472 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e303d9784258d7d4f604134698bbac9e7ad6e69aad2ff88b3b126fd4bf1f4c07" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.021504 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-s7bdg" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.135879 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becabfe2-053a-44ba-888c-8e32f14ae4da" path="/var/lib/kubelet/pods/becabfe2-053a-44ba-888c-8e32f14ae4da/volumes" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.703040 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6qslc/crc-debug-khs4k"] Feb 03 07:09:48 crc kubenswrapper[4872]: E0203 07:09:48.705913 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becabfe2-053a-44ba-888c-8e32f14ae4da" containerName="container-00" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.705941 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="becabfe2-053a-44ba-888c-8e32f14ae4da" containerName="container-00" Feb 03 07:09:48 crc kubenswrapper[4872]: E0203 07:09:48.705961 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.705970 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" Feb 03 07:09:48 crc kubenswrapper[4872]: E0203 07:09:48.705986 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="extract-content" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.705996 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="extract-content" Feb 03 07:09:48 crc kubenswrapper[4872]: E0203 07:09:48.706007 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="extract-utilities" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.706015 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="extract-utilities" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.706269 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="becabfe2-053a-44ba-888c-8e32f14ae4da" containerName="container-00" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.706295 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87ee4a5-5647-4a12-9d9b-c1c07be8b33c" containerName="registry-server" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.707528 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.822655 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7947d82-63c9-4f8c-8510-1aa919c4482b-host\") pod \"crc-debug-khs4k\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.822775 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmh9r\" (UniqueName: \"kubernetes.io/projected/f7947d82-63c9-4f8c-8510-1aa919c4482b-kube-api-access-hmh9r\") pod \"crc-debug-khs4k\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.923865 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmh9r\" (UniqueName: \"kubernetes.io/projected/f7947d82-63c9-4f8c-8510-1aa919c4482b-kube-api-access-hmh9r\") pod \"crc-debug-khs4k\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.923998 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7947d82-63c9-4f8c-8510-1aa919c4482b-host\") pod \"crc-debug-khs4k\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.924147 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7947d82-63c9-4f8c-8510-1aa919c4482b-host\") pod \"crc-debug-khs4k\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:48 crc kubenswrapper[4872]: I0203 07:09:48.946433 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmh9r\" (UniqueName: \"kubernetes.io/projected/f7947d82-63c9-4f8c-8510-1aa919c4482b-kube-api-access-hmh9r\") pod \"crc-debug-khs4k\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:49 crc kubenswrapper[4872]: I0203 07:09:49.025672 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:50 crc kubenswrapper[4872]: I0203 07:09:50.038835 4872 generic.go:334] "Generic (PLEG): container finished" podID="f7947d82-63c9-4f8c-8510-1aa919c4482b" containerID="38eef8d28ab440c9b22f45d3c4864eec4772772c34980450ad993ef52370667a" exitCode=0 Feb 03 07:09:50 crc kubenswrapper[4872]: I0203 07:09:50.038913 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-khs4k" event={"ID":"f7947d82-63c9-4f8c-8510-1aa919c4482b","Type":"ContainerDied","Data":"38eef8d28ab440c9b22f45d3c4864eec4772772c34980450ad993ef52370667a"} Feb 03 07:09:50 crc kubenswrapper[4872]: I0203 07:09:50.039211 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-khs4k" event={"ID":"f7947d82-63c9-4f8c-8510-1aa919c4482b","Type":"ContainerStarted","Data":"3e28861b6ce5001f7e4575a55d9f8c612c70a7997c54c86ebafaeaa9a317773f"} Feb 03 07:09:51 crc kubenswrapper[4872]: I0203 07:09:51.376903 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:51 crc kubenswrapper[4872]: I0203 07:09:51.476953 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7947d82-63c9-4f8c-8510-1aa919c4482b-host\") pod \"f7947d82-63c9-4f8c-8510-1aa919c4482b\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " Feb 03 07:09:51 crc kubenswrapper[4872]: I0203 07:09:51.477023 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmh9r\" (UniqueName: \"kubernetes.io/projected/f7947d82-63c9-4f8c-8510-1aa919c4482b-kube-api-access-hmh9r\") pod \"f7947d82-63c9-4f8c-8510-1aa919c4482b\" (UID: \"f7947d82-63c9-4f8c-8510-1aa919c4482b\") " Feb 03 07:09:51 crc kubenswrapper[4872]: I0203 07:09:51.479043 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7947d82-63c9-4f8c-8510-1aa919c4482b-host" (OuterVolumeSpecName: "host") pod "f7947d82-63c9-4f8c-8510-1aa919c4482b" (UID: "f7947d82-63c9-4f8c-8510-1aa919c4482b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 07:09:51 crc kubenswrapper[4872]: I0203 07:09:51.514048 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7947d82-63c9-4f8c-8510-1aa919c4482b-kube-api-access-hmh9r" (OuterVolumeSpecName: "kube-api-access-hmh9r") pod "f7947d82-63c9-4f8c-8510-1aa919c4482b" (UID: "f7947d82-63c9-4f8c-8510-1aa919c4482b"). InnerVolumeSpecName "kube-api-access-hmh9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:09:51 crc kubenswrapper[4872]: I0203 07:09:51.579437 4872 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f7947d82-63c9-4f8c-8510-1aa919c4482b-host\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:51 crc kubenswrapper[4872]: I0203 07:09:51.579473 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmh9r\" (UniqueName: \"kubernetes.io/projected/f7947d82-63c9-4f8c-8510-1aa919c4482b-kube-api-access-hmh9r\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:52 crc kubenswrapper[4872]: I0203 07:09:52.060035 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-khs4k" event={"ID":"f7947d82-63c9-4f8c-8510-1aa919c4482b","Type":"ContainerDied","Data":"3e28861b6ce5001f7e4575a55d9f8c612c70a7997c54c86ebafaeaa9a317773f"} Feb 03 07:09:52 crc kubenswrapper[4872]: I0203 07:09:52.060310 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e28861b6ce5001f7e4575a55d9f8c612c70a7997c54c86ebafaeaa9a317773f" Feb 03 07:09:52 crc kubenswrapper[4872]: I0203 07:09:52.060429 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-khs4k" Feb 03 07:09:52 crc kubenswrapper[4872]: I0203 07:09:52.614511 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6qslc/crc-debug-khs4k"] Feb 03 07:09:52 crc kubenswrapper[4872]: I0203 07:09:52.626978 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6qslc/crc-debug-khs4k"] Feb 03 07:09:53 crc kubenswrapper[4872]: I0203 07:09:53.777466 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6qslc/crc-debug-r5p6b"] Feb 03 07:09:53 crc kubenswrapper[4872]: E0203 07:09:53.778244 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7947d82-63c9-4f8c-8510-1aa919c4482b" containerName="container-00" Feb 03 07:09:53 crc kubenswrapper[4872]: I0203 07:09:53.778262 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7947d82-63c9-4f8c-8510-1aa919c4482b" containerName="container-00" Feb 03 07:09:53 crc kubenswrapper[4872]: I0203 07:09:53.778500 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7947d82-63c9-4f8c-8510-1aa919c4482b" containerName="container-00" Feb 03 07:09:53 crc kubenswrapper[4872]: I0203 07:09:53.779273 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:53 crc kubenswrapper[4872]: I0203 07:09:53.931491 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3021e59-d244-4761-8e4c-605be0862cd6-host\") pod \"crc-debug-r5p6b\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:53 crc kubenswrapper[4872]: I0203 07:09:53.931598 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjbgr\" (UniqueName: \"kubernetes.io/projected/f3021e59-d244-4761-8e4c-605be0862cd6-kube-api-access-kjbgr\") pod \"crc-debug-r5p6b\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.033181 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3021e59-d244-4761-8e4c-605be0862cd6-host\") pod \"crc-debug-r5p6b\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.033249 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjbgr\" (UniqueName: \"kubernetes.io/projected/f3021e59-d244-4761-8e4c-605be0862cd6-kube-api-access-kjbgr\") pod \"crc-debug-r5p6b\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.033381 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3021e59-d244-4761-8e4c-605be0862cd6-host\") pod \"crc-debug-r5p6b\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.052021 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjbgr\" (UniqueName: \"kubernetes.io/projected/f3021e59-d244-4761-8e4c-605be0862cd6-kube-api-access-kjbgr\") pod \"crc-debug-r5p6b\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.099506 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.140266 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7947d82-63c9-4f8c-8510-1aa919c4482b" path="/var/lib/kubelet/pods/f7947d82-63c9-4f8c-8510-1aa919c4482b/volumes" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.430441 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4dvj4"] Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.432478 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.449876 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dvj4"] Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.547440 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-utilities\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.547741 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbh7c\" (UniqueName: \"kubernetes.io/projected/6a47554e-d09d-468c-8226-4d2458444be2-kube-api-access-nbh7c\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.547857 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-catalog-content\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.649662 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-utilities\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.649765 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbh7c\" (UniqueName: \"kubernetes.io/projected/6a47554e-d09d-468c-8226-4d2458444be2-kube-api-access-nbh7c\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.649808 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-catalog-content\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.650455 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-catalog-content\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.650639 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-utilities\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.684036 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nbh7c\" (UniqueName: \"kubernetes.io/projected/6a47554e-d09d-468c-8226-4d2458444be2-kube-api-access-nbh7c\") pod \"certified-operators-4dvj4\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:54 crc kubenswrapper[4872]: I0203 07:09:54.767397 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:09:55 crc kubenswrapper[4872]: I0203 07:09:55.108364 4872 generic.go:334] "Generic (PLEG): container finished" podID="f3021e59-d244-4761-8e4c-605be0862cd6" containerID="2e8bbd722340cbfa356ef8781d10f2656bca061e94e86c8988d7d82b7c6bb4bb" exitCode=0 Feb 03 07:09:55 crc kubenswrapper[4872]: I0203 07:09:55.109379 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-r5p6b" event={"ID":"f3021e59-d244-4761-8e4c-605be0862cd6","Type":"ContainerDied","Data":"2e8bbd722340cbfa356ef8781d10f2656bca061e94e86c8988d7d82b7c6bb4bb"} Feb 03 07:09:55 crc kubenswrapper[4872]: I0203 07:09:55.109415 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/crc-debug-r5p6b" event={"ID":"f3021e59-d244-4761-8e4c-605be0862cd6","Type":"ContainerStarted","Data":"74ddf80f73cb548445d49276989952935a048fee397c242a6dafb568b97ee1a0"} Feb 03 07:09:55 crc kubenswrapper[4872]: I0203 07:09:55.168736 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6qslc/crc-debug-r5p6b"] Feb 03 07:09:55 crc kubenswrapper[4872]: I0203 07:09:55.176424 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6qslc/crc-debug-r5p6b"] Feb 03 07:09:55 crc kubenswrapper[4872]: I0203 07:09:55.381426 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dvj4"] Feb 03 07:09:55 crc kubenswrapper[4872]: W0203 07:09:55.391609 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a47554e_d09d_468c_8226_4d2458444be2.slice/crio-e10f02f7ec71766d4d3017d16c622abc8cc40e8fb5793e170e19852d72021512 WatchSource:0}: Error finding container e10f02f7ec71766d4d3017d16c622abc8cc40e8fb5793e170e19852d72021512: Status 404 returned error can't find the container with id e10f02f7ec71766d4d3017d16c622abc8cc40e8fb5793e170e19852d72021512 Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.120609 4872 generic.go:334] "Generic (PLEG): container finished" podID="6a47554e-d09d-468c-8226-4d2458444be2" containerID="fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4" exitCode=0 Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.120729 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dvj4" event={"ID":"6a47554e-d09d-468c-8226-4d2458444be2","Type":"ContainerDied","Data":"fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4"} Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.120920 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dvj4" event={"ID":"6a47554e-d09d-468c-8226-4d2458444be2","Type":"ContainerStarted","Data":"e10f02f7ec71766d4d3017d16c622abc8cc40e8fb5793e170e19852d72021512"} Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.260068 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.420573 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjbgr\" (UniqueName: \"kubernetes.io/projected/f3021e59-d244-4761-8e4c-605be0862cd6-kube-api-access-kjbgr\") pod \"f3021e59-d244-4761-8e4c-605be0862cd6\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.420665 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3021e59-d244-4761-8e4c-605be0862cd6-host\") pod \"f3021e59-d244-4761-8e4c-605be0862cd6\" (UID: \"f3021e59-d244-4761-8e4c-605be0862cd6\") " Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.420747 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3021e59-d244-4761-8e4c-605be0862cd6-host" (OuterVolumeSpecName: "host") pod "f3021e59-d244-4761-8e4c-605be0862cd6" (UID: "f3021e59-d244-4761-8e4c-605be0862cd6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.421324 4872 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f3021e59-d244-4761-8e4c-605be0862cd6-host\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.426005 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3021e59-d244-4761-8e4c-605be0862cd6-kube-api-access-kjbgr" (OuterVolumeSpecName: "kube-api-access-kjbgr") pod "f3021e59-d244-4761-8e4c-605be0862cd6" (UID: "f3021e59-d244-4761-8e4c-605be0862cd6"). InnerVolumeSpecName "kube-api-access-kjbgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:09:56 crc kubenswrapper[4872]: I0203 07:09:56.523699 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjbgr\" (UniqueName: \"kubernetes.io/projected/f3021e59-d244-4761-8e4c-605be0862cd6-kube-api-access-kjbgr\") on node \"crc\" DevicePath \"\"" Feb 03 07:09:57 crc kubenswrapper[4872]: I0203 07:09:57.132479 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dvj4" event={"ID":"6a47554e-d09d-468c-8226-4d2458444be2","Type":"ContainerStarted","Data":"0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48"} Feb 03 07:09:57 crc kubenswrapper[4872]: I0203 07:09:57.135237 4872 scope.go:117] "RemoveContainer" containerID="2e8bbd722340cbfa356ef8781d10f2656bca061e94e86c8988d7d82b7c6bb4bb" Feb 03 07:09:57 crc kubenswrapper[4872]: I0203 07:09:57.135365 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/crc-debug-r5p6b" Feb 03 07:09:58 crc kubenswrapper[4872]: I0203 07:09:58.137646 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3021e59-d244-4761-8e4c-605be0862cd6" path="/var/lib/kubelet/pods/f3021e59-d244-4761-8e4c-605be0862cd6/volumes" Feb 03 07:09:59 crc kubenswrapper[4872]: I0203 07:09:59.162119 4872 generic.go:334] "Generic (PLEG): container finished" podID="6a47554e-d09d-468c-8226-4d2458444be2" containerID="0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48" exitCode=0 Feb 03 07:09:59 crc kubenswrapper[4872]: I0203 07:09:59.162304 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dvj4" event={"ID":"6a47554e-d09d-468c-8226-4d2458444be2","Type":"ContainerDied","Data":"0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48"} Feb 03 07:10:00 crc kubenswrapper[4872]: I0203 07:10:00.177245 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dvj4" event={"ID":"6a47554e-d09d-468c-8226-4d2458444be2","Type":"ContainerStarted","Data":"aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819"} Feb 03 07:10:00 crc kubenswrapper[4872]: I0203 07:10:00.219151 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4dvj4" podStartSLOduration=2.779218174 podStartE2EDuration="6.219118568s" podCreationTimestamp="2026-02-03 07:09:54 +0000 UTC" firstStartedPulling="2026-02-03 07:09:56.123966991 +0000 UTC m=+4166.706658405" lastFinishedPulling="2026-02-03 07:09:59.563867385 +0000 UTC m=+4170.146558799" observedRunningTime="2026-02-03 07:10:00.208310687 +0000 UTC m=+4170.791002121" watchObservedRunningTime="2026-02-03 07:10:00.219118568 +0000 UTC m=+4170.801809992" Feb 03 07:10:01 crc kubenswrapper[4872]: I0203 07:10:01.271547 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:10:01 crc kubenswrapper[4872]: I0203 07:10:01.271909 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:10:04 crc kubenswrapper[4872]: I0203 07:10:04.768419 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:10:04 crc kubenswrapper[4872]: I0203 07:10:04.770601 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:10:04 crc kubenswrapper[4872]: I0203 07:10:04.818306 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:10:05 crc kubenswrapper[4872]: I0203 07:10:05.311128 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:10:05 crc kubenswrapper[4872]: I0203 07:10:05.360878 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dvj4"] Feb 03 
07:10:07 crc kubenswrapper[4872]: I0203 07:10:07.256123 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4dvj4" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="registry-server" containerID="cri-o://aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819" gracePeriod=2 Feb 03 07:10:07 crc kubenswrapper[4872]: I0203 07:10:07.868284 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:10:07 crc kubenswrapper[4872]: I0203 07:10:07.972858 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-catalog-content\") pod \"6a47554e-d09d-468c-8226-4d2458444be2\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " Feb 03 07:10:07 crc kubenswrapper[4872]: I0203 07:10:07.973025 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-utilities\") pod \"6a47554e-d09d-468c-8226-4d2458444be2\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " Feb 03 07:10:07 crc kubenswrapper[4872]: I0203 07:10:07.973110 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbh7c\" (UniqueName: \"kubernetes.io/projected/6a47554e-d09d-468c-8226-4d2458444be2-kube-api-access-nbh7c\") pod \"6a47554e-d09d-468c-8226-4d2458444be2\" (UID: \"6a47554e-d09d-468c-8226-4d2458444be2\") " Feb 03 07:10:07 crc kubenswrapper[4872]: I0203 07:10:07.973830 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-utilities" (OuterVolumeSpecName: "utilities") pod "6a47554e-d09d-468c-8226-4d2458444be2" (UID: "6a47554e-d09d-468c-8226-4d2458444be2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:10:07 crc kubenswrapper[4872]: I0203 07:10:07.979953 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a47554e-d09d-468c-8226-4d2458444be2-kube-api-access-nbh7c" (OuterVolumeSpecName: "kube-api-access-nbh7c") pod "6a47554e-d09d-468c-8226-4d2458444be2" (UID: "6a47554e-d09d-468c-8226-4d2458444be2"). InnerVolumeSpecName "kube-api-access-nbh7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.075461 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.075490 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbh7c\" (UniqueName: \"kubernetes.io/projected/6a47554e-d09d-468c-8226-4d2458444be2-kube-api-access-nbh7c\") on node \"crc\" DevicePath \"\"" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.266450 4872 generic.go:334] "Generic (PLEG): container finished" podID="6a47554e-d09d-468c-8226-4d2458444be2" containerID="aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819" exitCode=0 Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.266491 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dvj4" event={"ID":"6a47554e-d09d-468c-8226-4d2458444be2","Type":"ContainerDied","Data":"aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819"} Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.266519 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dvj4" event={"ID":"6a47554e-d09d-468c-8226-4d2458444be2","Type":"ContainerDied","Data":"e10f02f7ec71766d4d3017d16c622abc8cc40e8fb5793e170e19852d72021512"} Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.266522 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dvj4" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.266535 4872 scope.go:117] "RemoveContainer" containerID="aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.287397 4872 scope.go:117] "RemoveContainer" containerID="0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.327497 4872 scope.go:117] "RemoveContainer" containerID="fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.364083 4872 scope.go:117] "RemoveContainer" containerID="aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819" Feb 03 07:10:08 crc kubenswrapper[4872]: E0203 07:10:08.364530 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819\": container with ID starting with aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819 not found: ID does not exist" containerID="aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.364630 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819"} err="failed to get container status \"aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819\": rpc error: code = NotFound desc = could not find container \"aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819\": container with ID starting with aaf2a720d783595ce21256d63fd441a7cb549bbac76a36a29bbd663683368819 not found: ID does not exist" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.364784 4872 scope.go:117] 
"RemoveContainer" containerID="0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48" Feb 03 07:10:08 crc kubenswrapper[4872]: E0203 07:10:08.365102 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48\": container with ID starting with 0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48 not found: ID does not exist" containerID="0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.365125 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48"} err="failed to get container status \"0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48\": rpc error: code = NotFound desc = could not find container \"0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48\": container with ID starting with 0e572951297d6996155245a90df76940ee8c227cfa6f0e0b3c3df66b06507c48 not found: ID does not exist" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.365143 4872 scope.go:117] "RemoveContainer" containerID="fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4" Feb 03 07:10:08 crc kubenswrapper[4872]: E0203 07:10:08.365330 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4\": container with ID starting with fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4 not found: ID does not exist" containerID="fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.365358 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4"} err="failed to get container status \"fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4\": rpc error: code = NotFound desc = could not find container \"fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4\": container with ID starting with fa5951f188491742c13ddb265be4aadb098dd0fd66ba7a2049658466ea84c2b4 not found: ID does not exist" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.436161 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a47554e-d09d-468c-8226-4d2458444be2" (UID: "6a47554e-d09d-468c-8226-4d2458444be2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.482284 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a47554e-d09d-468c-8226-4d2458444be2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.599266 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dvj4"] Feb 03 07:10:08 crc kubenswrapper[4872]: I0203 07:10:08.612314 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4dvj4"] Feb 03 07:10:10 crc kubenswrapper[4872]: I0203 07:10:10.132805 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a47554e-d09d-468c-8226-4d2458444be2" path="/var/lib/kubelet/pods/6a47554e-d09d-468c-8226-4d2458444be2/volumes" Feb 03 07:10:26 crc kubenswrapper[4872]: I0203 07:10:26.873027 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8447df9874-g22nb_f3bfff84-36be-489c-85a2-6e4ebfec4d1a/barbican-api/0.log" Feb 03 07:10:26 crc kubenswrapper[4872]: I0203 07:10:26.915955 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8447df9874-g22nb_f3bfff84-36be-489c-85a2-6e4ebfec4d1a/barbican-api-log/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.123505 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dd8d7b8db-mtx4r_db9d16c1-a901-456f-a48d-a56879b49c8d/barbican-keystone-listener/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.230904 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dd8d7b8db-mtx4r_db9d16c1-a901-456f-a48d-a56879b49c8d/barbican-keystone-listener-log/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.450774 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c77557787-rb2tb_8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5/barbican-worker/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.492964 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c77557787-rb2tb_8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5/barbican-worker-log/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.641920 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8_711888ee-ec08-437f-bf74-54ea092796bf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.782757 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/ceilometer-central-agent/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.844996 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/ceilometer-notification-agent/0.log" Feb 03 07:10:27 crc kubenswrapper[4872]: I0203 07:10:27.921397 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/proxy-httpd/0.log" Feb 03 07:10:28 crc kubenswrapper[4872]: I0203 07:10:28.053563 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/sg-core/0.log" Feb 03 07:10:28 crc kubenswrapper[4872]: I0203 07:10:28.144579 4872 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e1031a8c-c3fb-4022-826e-77509f2a2b2f/cinder-api/0.log" Feb 03 07:10:28 crc kubenswrapper[4872]: I0203 07:10:28.229919 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e1031a8c-c3fb-4022-826e-77509f2a2b2f/cinder-api-log/0.log" Feb 03 07:10:28 crc kubenswrapper[4872]: I0203 07:10:28.990549 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8927k_4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.002414 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1e82d1b0-0354-4126-b305-6af3e5fdcb9a/probe/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.007895 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1e82d1b0-0354-4126-b305-6af3e5fdcb9a/cinder-scheduler/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.222455 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-746ln_c73cade5-ebf2-4b32-9eec-efbc6a089cee/init/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.366913 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld_34dc0856-28c2-4b86-adb9-0310701b5110/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.523785 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-746ln_c73cade5-ebf2-4b32-9eec-efbc6a089cee/init/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.651460 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7_72bf0048-7229-4354-a6e3-1c508f3bacef/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.737323 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-746ln_c73cade5-ebf2-4b32-9eec-efbc6a089cee/dnsmasq-dns/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.906833 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_71811df4-e41d-4e6b-a94c-81e871e39632/glance-log/0.log" Feb 03 07:10:29 crc kubenswrapper[4872]: I0203 07:10:29.921796 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_71811df4-e41d-4e6b-a94c-81e871e39632/glance-httpd/0.log" Feb 03 07:10:30 crc kubenswrapper[4872]: I0203 07:10:30.741786 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_beba659d-d168-47b7-a0ee-f467101ed286/glance-log/0.log" Feb 03 07:10:30 crc kubenswrapper[4872]: I0203 07:10:30.781229 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_beba659d-d168-47b7-a0ee-f467101ed286/glance-httpd/0.log" Feb 03 07:10:31 crc kubenswrapper[4872]: I0203 07:10:31.207353 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57dc94599b-bvf7j_f475ab66-31e6-46da-ad2e-8e8279e33b68/horizon/1.log" Feb 03 07:10:31 crc kubenswrapper[4872]: I0203 07:10:31.237127 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-57dc94599b-bvf7j_f475ab66-31e6-46da-ad2e-8e8279e33b68/horizon/0.log" Feb 03 07:10:31 crc kubenswrapper[4872]: I0203 07:10:31.273837 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:10:31 crc kubenswrapper[4872]: I0203 07:10:31.273895 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:10:31 crc kubenswrapper[4872]: I0203 07:10:31.511528 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57dc94599b-bvf7j_f475ab66-31e6-46da-ad2e-8e8279e33b68/horizon-log/0.log" Feb 03 07:10:31 crc kubenswrapper[4872]: I0203 07:10:31.871888 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh_ab6004c7-7d34-42ff-bf95-1358f1abcbf1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:32 crc kubenswrapper[4872]: I0203 07:10:32.487294 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29501701-6l75v_1214a97d-f94a-40bf-88ea-0310ff11684d/keystone-cron/0.log" Feb 03 07:10:32 crc kubenswrapper[4872]: I0203 07:10:32.488239 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d26e8416-4ddb-40f1-bfa1-482da12274a3/kube-state-metrics/0.log" Feb 03 07:10:32 crc kubenswrapper[4872]: I0203 07:10:32.504525 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qckpv_c09861b6-6f3c-496c-a46e-eb7667965fc7/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:32 crc kubenswrapper[4872]: I0203 07:10:32.832017 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs_3d7243a2-dd2b-4462-8313-92e68450f743/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:33 crc kubenswrapper[4872]: I0203 07:10:33.064305 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6447fd6947-kkqrr_885d40a9-ca6e-4beb-9782-35099d10bf35/keystone-api/0.log" Feb 03 07:10:33 crc kubenswrapper[4872]: I0203 07:10:33.445548 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l_c7dd671e-752b-42ed-826a-be9e1bbb8d66/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:33 crc kubenswrapper[4872]: I0203 07:10:33.587801 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84b5664f65-lkwpl_23d834e1-1b59-4463-b893-fd23fa1e7ecd/neutron-httpd/0.log" Feb 03 07:10:33 crc kubenswrapper[4872]: I0203 07:10:33.734147 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84b5664f65-lkwpl_23d834e1-1b59-4463-b893-fd23fa1e7ecd/neutron-api/0.log" Feb 03 07:10:34 crc kubenswrapper[4872]: I0203 07:10:34.485052 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_898b1712-c38c-4438-9fd6-bc94e59b459e/nova-cell0-conductor-conductor/0.log" Feb 03 
07:10:35 crc kubenswrapper[4872]: I0203 07:10:35.002824 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d239893e-43bd-4f8f-b03e-6451a16a0865/nova-cell1-conductor-conductor/0.log" Feb 03 07:10:35 crc kubenswrapper[4872]: I0203 07:10:35.240227 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baca7029-2f99-49c6-810f-7a25a2a853d0/nova-api-log/0.log" Feb 03 07:10:35 crc kubenswrapper[4872]: I0203 07:10:35.245878 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_756dc212-f1ae-44f8-bcb1-d5c4180da686/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 07:10:35 crc kubenswrapper[4872]: I0203 07:10:35.459641 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baca7029-2f99-49c6-810f-7a25a2a853d0/nova-api-api/0.log" Feb 03 07:10:35 crc kubenswrapper[4872]: I0203 07:10:35.664325 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-m5jcw_74a3f126-057b-4f44-9483-82e6a6a00c90/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:35 crc kubenswrapper[4872]: I0203 07:10:35.765643 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_96d27a75-4427-4ff9-82ad-4672a9d403da/nova-metadata-log/0.log" Feb 03 07:10:36 crc kubenswrapper[4872]: I0203 07:10:36.074103 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a46bed9-4154-4a62-8805-fe67c55a2d89/mysql-bootstrap/0.log" Feb 03 07:10:36 crc kubenswrapper[4872]: I0203 07:10:36.375445 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a46bed9-4154-4a62-8805-fe67c55a2d89/mysql-bootstrap/0.log" Feb 03 07:10:36 crc kubenswrapper[4872]: I0203 07:10:36.376453 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a46bed9-4154-4a62-8805-fe67c55a2d89/galera/0.log" Feb 03 07:10:36 crc kubenswrapper[4872]: I0203 07:10:36.480672 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e5a4b0fb-cc06-47b7-b789-9d321718a06c/nova-scheduler-scheduler/0.log" Feb 03 07:10:36 crc kubenswrapper[4872]: I0203 07:10:36.655507 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e939fc-8c23-4843-a7ec-4cbd82d8cff7/mysql-bootstrap/0.log" Feb 03 07:10:37 crc kubenswrapper[4872]: I0203 07:10:37.244042 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e939fc-8c23-4843-a7ec-4cbd82d8cff7/galera/0.log" Feb 03 07:10:37 crc kubenswrapper[4872]: I0203 07:10:37.315129 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e939fc-8c23-4843-a7ec-4cbd82d8cff7/mysql-bootstrap/0.log" Feb 03 07:10:37 crc kubenswrapper[4872]: I0203 07:10:37.457400 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_96d27a75-4427-4ff9-82ad-4672a9d403da/nova-metadata-metadata/0.log" Feb 03 07:10:37 crc kubenswrapper[4872]: I0203 07:10:37.528178 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67/openstackclient/0.log" Feb 03 07:10:37 crc kubenswrapper[4872]: I0203 07:10:37.554234 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xsclk_e868189d-0fbd-45bc-83cb-9b71f951c53f/openstack-network-exporter/0.log" Feb 
03 07:10:37 crc kubenswrapper[4872]: I0203 07:10:37.821917 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovsdb-server-init/0.log" Feb 03 07:10:38 crc kubenswrapper[4872]: I0203 07:10:38.064726 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovsdb-server-init/0.log" Feb 03 07:10:38 crc kubenswrapper[4872]: I0203 07:10:38.262584 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovs-vswitchd/0.log" Feb 03 07:10:38 crc kubenswrapper[4872]: I0203 07:10:38.326915 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovsdb-server/0.log" Feb 03 07:10:38 crc kubenswrapper[4872]: I0203 07:10:38.409736 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wjbc7_19908dab-b232-4cd8-b45b-079cebdee593/ovn-controller/0.log" Feb 03 07:10:38 crc kubenswrapper[4872]: I0203 07:10:38.886949 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9vpmm_604ba5bf-8ae1-4540-9ec8-366de98da8ba/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:39 crc kubenswrapper[4872]: I0203 07:10:39.224451 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e1641148-8016-42db-879c-29e9e04666f3/openstack-network-exporter/0.log" Feb 03 07:10:39 crc kubenswrapper[4872]: I0203 07:10:39.247664 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e1641148-8016-42db-879c-29e9e04666f3/ovn-northd/0.log" Feb 03 07:10:39 crc kubenswrapper[4872]: I0203 07:10:39.335180 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_82f89f1b-12ce-4720-9af7-3d8acb128b65/openstack-network-exporter/0.log" Feb 03 07:10:39 crc kubenswrapper[4872]: I0203 07:10:39.542564 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_82f89f1b-12ce-4720-9af7-3d8acb128b65/ovsdbserver-nb/0.log" Feb 03 07:10:39 crc kubenswrapper[4872]: I0203 07:10:39.629932 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f323a5b2-6517-4f06-baec-308207807af3/openstack-network-exporter/0.log" Feb 03 07:10:39 crc kubenswrapper[4872]: I0203 07:10:39.680363 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f323a5b2-6517-4f06-baec-308207807af3/ovsdbserver-sb/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.115960 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f7ccbfc56-8bmzq_90a09160-e06d-497b-bf29-781a4009c899/placement-api/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.156211 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0d044870-6de9-4816-b9e2-249371dc40e6/setup-container/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.248872 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f7ccbfc56-8bmzq_90a09160-e06d-497b-bf29-781a4009c899/placement-log/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.403394 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0d044870-6de9-4816-b9e2-249371dc40e6/rabbitmq/0.log" Feb 03 
07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.511541 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0d044870-6de9-4816-b9e2-249371dc40e6/setup-container/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.519476 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ee583ece-3623-4df4-b879-4ab45489bb07/setup-container/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.932334 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ee583ece-3623-4df4-b879-4ab45489bb07/setup-container/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.962187 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65_e99e0ea4-b0dd-482a-986f-80eed7253030/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:40 crc kubenswrapper[4872]: I0203 07:10:40.966271 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ee583ece-3623-4df4-b879-4ab45489bb07/rabbitmq/0.log" Feb 03 07:10:41 crc kubenswrapper[4872]: I0203 07:10:41.179640 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t8f24_57a842b6-0ca1-471d-aae3-cb4fa4545417/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:41 crc kubenswrapper[4872]: I0203 07:10:41.229219 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56_861d7992-b778-4bc8-9708-5f94d519db54/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:41 crc kubenswrapper[4872]: I0203 07:10:41.634186 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nj6c7_b78a618f-d5ee-4722-8d29-b142f05127bf/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:41 crc kubenswrapper[4872]: I0203 07:10:41.766766 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fvbc6_60b9b2c5-7f79-49d1-b215-4de0664b44c0/ssh-known-hosts-edpm-deployment/0.log" Feb 03 07:10:42 crc kubenswrapper[4872]: I0203 07:10:42.010852 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f5458fb75-k8gpr_7869dbc8-d72a-47cf-8547-40b91024653f/proxy-server/0.log" Feb 03 07:10:42 crc kubenswrapper[4872]: I0203 07:10:42.167679 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f5458fb75-k8gpr_7869dbc8-d72a-47cf-8547-40b91024653f/proxy-httpd/0.log" Feb 03 07:10:42 crc kubenswrapper[4872]: I0203 07:10:42.206097 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-m4mq8_27de9be5-8c0c-4283-81ab-6ec3706d94c7/swift-ring-rebalance/0.log" Feb 03 07:10:42 crc kubenswrapper[4872]: I0203 07:10:42.363455 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-auditor/0.log" Feb 03 07:10:42 crc kubenswrapper[4872]: I0203 07:10:42.707872 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-reaper/0.log" Feb 03 07:10:42 crc kubenswrapper[4872]: I0203 07:10:42.838056 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-replicator/0.log" Feb 03 07:10:42 crc 
kubenswrapper[4872]: I0203 07:10:42.949943 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-auditor/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.012564 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-server/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.039525 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-replicator/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.171389 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-server/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.283373 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-auditor/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.286645 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-updater/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.442041 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-expirer/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.473783 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-replicator/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.555468 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-server/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.643003 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-updater/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.734580 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/swift-recon-cron/0.log" Feb 03 07:10:43 crc kubenswrapper[4872]: I0203 07:10:43.797261 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/rsync/0.log" Feb 03 07:10:44 crc kubenswrapper[4872]: I0203 07:10:44.090183 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ab488c2c-7a02-4e73-8aaa-5e0197d51631/tempest-tests-tempest-tests-runner/0.log" Feb 03 07:10:44 crc kubenswrapper[4872]: I0203 07:10:44.138005 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl_a56903a8-61f2-433b-9ab7-2f96b9f8d15f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:44 crc kubenswrapper[4872]: I0203 07:10:44.429418 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_105cad3e-c6c1-4dfa-93dd-9138d760b916/test-operator-logs-container/0.log" Feb 03 07:10:44 crc kubenswrapper[4872]: I0203 07:10:44.436006 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-fjglw_13b6c575-0d6a-4cf8-867d-3230bdded4e4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:10:58 crc kubenswrapper[4872]: I0203 07:10:58.426574 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ecd2a199-4a3b-4e36-8430-5301d68c1595/memcached/0.log" Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.271281 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.271798 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.271843 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.272510 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.272553 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" gracePeriod=600 Feb 03 07:11:01 crc kubenswrapper[4872]: E0203 07:11:01.393128 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.766732 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" exitCode=0 Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.766775 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4"} Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.766805 4872 scope.go:117] "RemoveContainer" containerID="df6fc6a27a71f15957fc4adf158004172fef9aa3018164e62164ec2c8b39d146" Feb 03 07:11:01 crc kubenswrapper[4872]: I0203 07:11:01.767393 4872 scope.go:117] "RemoveContainer" 
containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:11:01 crc kubenswrapper[4872]: E0203 07:11:01.767712 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:11:16 crc kubenswrapper[4872]: I0203 07:11:16.126726 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:11:16 crc kubenswrapper[4872]: E0203 07:11:16.127476 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:11:19 crc kubenswrapper[4872]: I0203 07:11:19.516349 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/util/0.log" Feb 03 07:11:19 crc kubenswrapper[4872]: I0203 07:11:19.621210 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/util/0.log" Feb 03 07:11:19 crc kubenswrapper[4872]: I0203 07:11:19.720164 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/pull/0.log" Feb 03 07:11:19 crc kubenswrapper[4872]: I0203 07:11:19.753956 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/pull/0.log" Feb 03 07:11:19 crc kubenswrapper[4872]: I0203 07:11:19.902070 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/pull/0.log" Feb 03 07:11:19 crc kubenswrapper[4872]: I0203 07:11:19.915712 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/extract/0.log" Feb 03 07:11:19 crc kubenswrapper[4872]: I0203 07:11:19.957483 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/util/0.log" Feb 03 07:11:20 crc kubenswrapper[4872]: I0203 07:11:20.152449 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-zznkj_876b6e4d-32cd-47e3-b748-f9c8ea1d84cf/manager/0.log" Feb 03 07:11:20 crc kubenswrapper[4872]: I0203 07:11:20.226195 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-bw7h4_febe7a4c-e275-4af0-b895-8701c164271c/manager/0.log" Feb 03 07:11:20 crc kubenswrapper[4872]: I0203 07:11:20.497802 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-5chl5_7102e0e7-3daa-4610-b931-ca17c7f08461/manager/0.log" Feb 03 07:11:20 crc kubenswrapper[4872]: I0203 07:11:20.585864 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-x2779_29e7b8a5-19cf-46ea-a135-019d30af35b3/manager/0.log" Feb 03 07:11:20 crc kubenswrapper[4872]: I0203 07:11:20.742815 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-dpd6g_394038df-4d8a-41cc-bb90-02dec7dd1fb3/manager/0.log" Feb 03 07:11:20 crc kubenswrapper[4872]: I0203 07:11:20.818514 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-n2mcv_8fc2acde-dcbe-4d32-ad0e-cd4627c2152b/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.150960 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-9qph7_cd3e162d-6733-47c4-b507-c08c577723d0/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.186624 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-lnvft_7319691f-007c-45cd-bd1b-11055339e2ab/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.336989 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-9x5w8_71308b40-7203-4586-9a21-9b4621a9aaf7/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.431111 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-dvqpz_c3aba523-0e11-4e5d-9adf-be5978a1f4e1/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.618282 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-ww2cx_cfe508f3-98be-48d5-bf5b-3cb24a9ba131/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.712816 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-czpn5_71efcd75-c242-4036-b2e0-fdb117880dd9/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.843565 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-rjz4z_026cffca-2976-4ba1-8bb6-3e86c4521166/manager/0.log" Feb 03 07:11:21 crc kubenswrapper[4872]: I0203 07:11:21.921189 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-p475q_e7d3c449-bd59-48b2-9047-2c7589cdf51a/manager/0.log" Feb 03 07:11:22 crc kubenswrapper[4872]: I0203 07:11:22.018138 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4_08ed93ce-02ea-45af-b481-69ed92f5aff5/manager/0.log" Feb 03 07:11:22 crc kubenswrapper[4872]: I0203 07:11:22.213808 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-67c68487b9-mxm2b_69023f68-79e2-4de9-a210-32ecce7b635b/operator/0.log" Feb 03 07:11:22 crc kubenswrapper[4872]: I0203 07:11:22.937293 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5nlb5_89cb3a32-685e-40fa-9370-374e91db24dd/registry-server/0.log" Feb 03 07:11:23 crc kubenswrapper[4872]: I0203 07:11:23.324845 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-lkxq4_584404c0-4ffd-43f5-a06f-009650dc0cc9/manager/0.log" Feb 03 07:11:23 crc kubenswrapper[4872]: I0203 07:11:23.475573 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-vl4tx_9bb5cb68-4c55-4c47-beb8-a9caa56db1b3/manager/0.log" Feb 03 07:11:23 crc kubenswrapper[4872]: I0203 07:11:23.700013 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5bc755b6c5-ptv7t_9ee72576-2dc3-4b0b-ba3d-38aa27fba615/manager/0.log" Feb 03 07:11:23 crc kubenswrapper[4872]: I0203 07:11:23.702821 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mvdpp_71ed58d0-78f2-497b-8802-3647a361c99b/operator/0.log" Feb 03 07:11:23 crc kubenswrapper[4872]: I0203 07:11:23.813522 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-w7ncd_d9a67e95-335b-40cf-af71-4f3fd69a1fd9/manager/0.log" Feb 03 07:11:23 crc kubenswrapper[4872]: I0203 07:11:23.976515 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-29rp5_d843f756-0ec0-4a79-b34d-14e257e22102/manager/0.log" Feb 03 07:11:24 crc kubenswrapper[4872]: I0203 07:11:24.438890 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-rlwfv_e0b27752-d9b8-4bd6-92c4-253508657db5/manager/0.log" Feb 03 07:11:24 crc kubenswrapper[4872]: I0203 07:11:24.519319 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-t4l5f_39f4fe96-ca54-4135-9eb3-e40a187e54a4/manager/0.log" Feb 03 07:11:29 crc kubenswrapper[4872]: I0203 07:11:29.122647 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:11:29 crc kubenswrapper[4872]: E0203 07:11:29.123444 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:11:42 crc kubenswrapper[4872]: I0203 07:11:42.123068 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:11:42 crc kubenswrapper[4872]: E0203 07:11:42.125108 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:11:44 crc kubenswrapper[4872]: I0203 07:11:44.452461 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zvqv2_3c1bfc9a-db7c-49a5-acd6-05ad2a616cae/control-plane-machine-set-operator/0.log" Feb 03 07:11:44 crc kubenswrapper[4872]: I0203 07:11:44.709671 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qcvzs_a611c711-1f25-4e6e-983c-17c001aaeabd/kube-rbac-proxy/0.log" Feb 03 07:11:44 crc kubenswrapper[4872]: I0203 07:11:44.727601 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qcvzs_a611c711-1f25-4e6e-983c-17c001aaeabd/machine-api-operator/0.log" Feb 03 07:11:56 crc kubenswrapper[4872]: I0203 07:11:56.123105 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:11:56 crc kubenswrapper[4872]: E0203 07:11:56.123934 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:11:58 crc kubenswrapper[4872]: I0203 07:11:58.453106 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vnbj8_d2894283-ae8b-4bb5-a0d0-825d14b8a2bc/cert-manager-controller/0.log" Feb 03 07:11:58 crc kubenswrapper[4872]: I0203 07:11:58.599893 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6kgzc_93180a09-56e7-468c-9181-e88473627564/cert-manager-cainjector/0.log" Feb 03 07:11:58 crc kubenswrapper[4872]: I0203 07:11:58.696437 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-x86ch_958bf8fc-d47f-45ff-b237-64ea37f16e2d/cert-manager-webhook/0.log" Feb 03 07:12:09 crc kubenswrapper[4872]: I0203 07:12:09.123185 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:12:09 crc kubenswrapper[4872]: E0203 07:12:09.124340 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:12:13 crc kubenswrapper[4872]: I0203 07:12:13.779557 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-pnq7p_724ba92c-602b-4031-8ad1-7e5b084c4386/nmstate-console-plugin/0.log" Feb 03 07:12:14 crc kubenswrapper[4872]: I0203 07:12:14.052312 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4rnzm_d985e2b5-be7b-4e11-835f-0fbb14859743/nmstate-handler/0.log" 
Feb 03 07:12:14 crc kubenswrapper[4872]: I0203 07:12:14.126264 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-22pzf_eecf7d0c-c77c-4bb5-9588-24b324a7848f/kube-rbac-proxy/0.log" Feb 03 07:12:14 crc kubenswrapper[4872]: I0203 07:12:14.251527 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-22pzf_eecf7d0c-c77c-4bb5-9588-24b324a7848f/nmstate-metrics/0.log" Feb 03 07:12:14 crc kubenswrapper[4872]: I0203 07:12:14.301177 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jnk4k_0a2cf0fd-a05d-4b50-a0fa-727e373679c2/nmstate-operator/0.log" Feb 03 07:12:14 crc kubenswrapper[4872]: I0203 07:12:14.484034 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-l6d5d_79f465a2-4fb7-470a-83ba-ed5d98e5227b/nmstate-webhook/0.log" Feb 03 07:12:21 crc kubenswrapper[4872]: I0203 07:12:21.122780 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:12:21 crc kubenswrapper[4872]: E0203 07:12:21.123572 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:12:32 crc kubenswrapper[4872]: I0203 07:12:32.123087 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:12:32 crc kubenswrapper[4872]: E0203 07:12:32.124303 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:12:44 crc kubenswrapper[4872]: I0203 07:12:44.427095 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tb2wg_fb6f2971-eff5-4e61-8584-073de69e2e5f/controller/0.log" Feb 03 07:12:44 crc kubenswrapper[4872]: I0203 07:12:44.448764 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tb2wg_fb6f2971-eff5-4e61-8584-073de69e2e5f/kube-rbac-proxy/0.log" Feb 03 07:12:44 crc kubenswrapper[4872]: I0203 07:12:44.569978 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:12:44 crc kubenswrapper[4872]: I0203 07:12:44.777333 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:12:44 crc kubenswrapper[4872]: I0203 07:12:44.781153 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:12:44 crc kubenswrapper[4872]: I0203 07:12:44.792475 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:12:44 crc kubenswrapper[4872]: I0203 07:12:44.853823 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.032348 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.096141 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.108740 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.157907 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.312134 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.341368 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/controller/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.344464 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.387467 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.642486 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/kube-rbac-proxy/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.661734 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/frr-metrics/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.708750 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/kube-rbac-proxy-frr/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.877387 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/reloader/0.log" Feb 03 07:12:45 crc kubenswrapper[4872]: I0203 07:12:45.986395 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-grdt7_69417317-0a8d-4c10-8f4c-fe8e387b678e/frr-k8s-webhook-server/0.log" Feb 03 07:12:46 crc kubenswrapper[4872]: I0203 07:12:46.125106 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:12:46 crc kubenswrapper[4872]: E0203 07:12:46.125833 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:12:46 crc kubenswrapper[4872]: I0203 07:12:46.294312 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fdc65c4dc-rczwz_80b94f6b-5ca4-4650-b58f-df22137e4c04/manager/0.log" Feb 03 07:12:46 crc kubenswrapper[4872]: I0203 07:12:46.423311 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-74df9ff78b-5dz89_c81e20ac-e8e8-4f44-ba9d-f52c5c30849b/webhook-server/0.log" Feb 03 07:12:46 crc kubenswrapper[4872]: I0203 07:12:46.626255 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4qtc9_21971f1c-c210-4df4-942c-4637ecdbcd75/kube-rbac-proxy/0.log" Feb 03 07:12:47 crc kubenswrapper[4872]: I0203 07:12:47.111284 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/frr/0.log" Feb 03 07:12:47 crc kubenswrapper[4872]: I0203 07:12:47.137776 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4qtc9_21971f1c-c210-4df4-942c-4637ecdbcd75/speaker/0.log" Feb 03 07:12:59 crc kubenswrapper[4872]: I0203 07:12:59.123108 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:12:59 crc kubenswrapper[4872]: E0203 07:12:59.124853 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:13:01 crc kubenswrapper[4872]: I0203 07:13:01.230113 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/util/0.log" Feb 03 07:13:01 crc kubenswrapper[4872]: I0203 07:13:01.954964 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/pull/0.log" Feb 03 07:13:01 crc kubenswrapper[4872]: I0203 07:13:01.955051 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/pull/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.003647 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/util/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.185419 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/util/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.230454 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/extract/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.231609 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/pull/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.374521 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/util/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.593581 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/pull/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.660278 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/pull/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.725526 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/util/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.825201 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/util/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.843444 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/pull/0.log" Feb 03 07:13:02 crc kubenswrapper[4872]: I0203 07:13:02.913059 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/extract/0.log" Feb 03 07:13:03 crc kubenswrapper[4872]: I0203 07:13:03.044075 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-utilities/0.log" Feb 03 07:13:03 crc kubenswrapper[4872]: I0203 07:13:03.554101 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-utilities/0.log" Feb 03 07:13:03 crc kubenswrapper[4872]: I0203 07:13:03.695835 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-content/0.log" Feb 03 07:13:03 crc kubenswrapper[4872]: I0203 07:13:03.719915 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-content/0.log" Feb 03 07:13:03 crc kubenswrapper[4872]: I0203 07:13:03.874394 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-content/0.log" Feb 03 07:13:03 crc kubenswrapper[4872]: I0203 07:13:03.901645 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-utilities/0.log" Feb 03 07:13:04 crc kubenswrapper[4872]: I0203 07:13:04.153718 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-utilities/0.log" Feb 03 07:13:04 crc kubenswrapper[4872]: I0203 07:13:04.445852 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/registry-server/0.log" Feb 03 07:13:04 crc kubenswrapper[4872]: I0203 07:13:04.468920 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-utilities/0.log" Feb 03 07:13:04 crc kubenswrapper[4872]: I0203 07:13:04.548202 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-content/0.log" Feb 03 07:13:04 crc kubenswrapper[4872]: I0203 07:13:04.566469 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-content/0.log" Feb 03 07:13:04 crc kubenswrapper[4872]: I0203 07:13:04.765533 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-utilities/0.log" Feb 03 07:13:04 crc kubenswrapper[4872]: I0203 07:13:04.949562 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-content/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.156164 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7sj6w_c4bf20b9-bd1d-4d8b-8547-500924c14af5/marketplace-operator/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.200085 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-utilities/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.527308 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-utilities/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.558023 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-content/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.658511 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-content/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.723891 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/registry-server/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.769792 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-utilities/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.794011 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-content/0.log" Feb 03 07:13:05 crc kubenswrapper[4872]: I0203 07:13:05.997670 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-utilities/0.log" Feb 03 07:13:06 crc kubenswrapper[4872]: I0203 07:13:06.118765 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/registry-server/0.log" Feb 03 07:13:06 crc kubenswrapper[4872]: I0203 07:13:06.250936 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-utilities/0.log" Feb 03 07:13:06 crc kubenswrapper[4872]: I0203 07:13:06.326178 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-content/0.log" Feb 03 07:13:06 crc kubenswrapper[4872]: I0203 07:13:06.326404 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-content/0.log" Feb 03 07:13:06 crc kubenswrapper[4872]: I0203 07:13:06.487023 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-utilities/0.log" Feb 03 07:13:06 crc kubenswrapper[4872]: I0203 07:13:06.498890 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-content/0.log" Feb 03 07:13:07 crc kubenswrapper[4872]: I0203 07:13:07.050384 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/registry-server/0.log" Feb 03 07:13:14 crc kubenswrapper[4872]: I0203 07:13:14.122794 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:13:14 crc kubenswrapper[4872]: E0203 07:13:14.123453 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:13:28 crc kubenswrapper[4872]: I0203 07:13:28.122822 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:13:28 crc kubenswrapper[4872]: E0203 07:13:28.123631 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:13:42 crc kubenswrapper[4872]: I0203 07:13:42.127357 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:13:42 crc kubenswrapper[4872]: 
E0203 07:13:42.128219 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:13:53 crc kubenswrapper[4872]: I0203 07:13:53.122831 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:13:53 crc kubenswrapper[4872]: E0203 07:13:53.123618 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:14:04 crc kubenswrapper[4872]: I0203 07:14:04.122359 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:14:04 crc kubenswrapper[4872]: E0203 07:14:04.123121 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:14:19 crc kubenswrapper[4872]: I0203 07:14:19.123621 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:14:19 crc kubenswrapper[4872]: E0203 07:14:19.125470 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:14:32 crc kubenswrapper[4872]: I0203 07:14:32.123050 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:14:32 crc kubenswrapper[4872]: E0203 07:14:32.124062 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:14:46 crc kubenswrapper[4872]: I0203 07:14:46.123386 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:14:46 crc kubenswrapper[4872]: E0203 07:14:46.124254 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:14:58 crc kubenswrapper[4872]: I0203 07:14:58.128673 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:14:58 crc kubenswrapper[4872]: E0203 07:14:58.130444 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.182733 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"] Feb 03 07:15:00 crc kubenswrapper[4872]: E0203 07:15:00.183408 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="registry-server" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.183421 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="registry-server" Feb 03 07:15:00 crc kubenswrapper[4872]: E0203 07:15:00.183432 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="extract-content" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.183438 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="extract-content" Feb 03 07:15:00 crc kubenswrapper[4872]: E0203 07:15:00.183452 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="extract-utilities" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.183458 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="extract-utilities" Feb 03 07:15:00 crc kubenswrapper[4872]: E0203 07:15:00.183470 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3021e59-d244-4761-8e4c-605be0862cd6" containerName="container-00" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.183476 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3021e59-d244-4761-8e4c-605be0862cd6" containerName="container-00" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.183660 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a47554e-d09d-468c-8226-4d2458444be2" containerName="registry-server" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.183706 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3021e59-d244-4761-8e4c-605be0862cd6" containerName="container-00" Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.184348 4872 util.go:30] "No sandbox for pod can be found. 
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.186460 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.187319 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.198926 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"]
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.269122 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-secret-volume\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.269184 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gh5r\" (UniqueName: \"kubernetes.io/projected/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-kube-api-access-4gh5r\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.269209 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-config-volume\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.371386 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-secret-volume\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.371476 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gh5r\" (UniqueName: \"kubernetes.io/projected/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-kube-api-access-4gh5r\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.371505 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-config-volume\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.372526 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-config-volume\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.380811 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-secret-volume\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.401631 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gh5r\" (UniqueName: \"kubernetes.io/projected/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-kube-api-access-4gh5r\") pod \"collect-profiles-29501715-zc57l\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:00 crc kubenswrapper[4872]: I0203 07:15:00.503149 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:01 crc kubenswrapper[4872]: I0203 07:15:01.045421 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"]
Feb 03 07:15:01 crc kubenswrapper[4872]: I0203 07:15:01.059314 4872 scope.go:117] "RemoveContainer" containerID="c1e0d1410d252555d0eebb8b887248d9ef62ff9b9273bdbd244078b2c249143d"
Feb 03 07:15:01 crc kubenswrapper[4872]: I0203 07:15:01.863936 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l" event={"ID":"27306b0c-4a27-4f77-9c9d-45f528f8c1d3","Type":"ContainerStarted","Data":"1e1b6767e0a7952aeaa74c52201d2d2ece7acc8718175cb6efd6ce6302754279"}
Feb 03 07:15:01 crc kubenswrapper[4872]: I0203 07:15:01.865428 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l" event={"ID":"27306b0c-4a27-4f77-9c9d-45f528f8c1d3","Type":"ContainerStarted","Data":"80799b6bfb13cc585070b8fb321a03d6509ae814246094c95a559f5f657395fe"}
Feb 03 07:15:01 crc kubenswrapper[4872]: I0203 07:15:01.903672 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l" podStartSLOduration=1.903655645 podStartE2EDuration="1.903655645s" podCreationTimestamp="2026-02-03 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 07:15:01.88773907 +0000 UTC m=+4472.470430494" watchObservedRunningTime="2026-02-03 07:15:01.903655645 +0000 UTC m=+4472.486347049"
Feb 03 07:15:02 crc kubenswrapper[4872]: I0203 07:15:02.876731 4872 generic.go:334] "Generic (PLEG): container finished" podID="27306b0c-4a27-4f77-9c9d-45f528f8c1d3" containerID="1e1b6767e0a7952aeaa74c52201d2d2ece7acc8718175cb6efd6ce6302754279" exitCode=0
Feb 03 07:15:02 crc kubenswrapper[4872]: I0203 07:15:02.876811 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l" event={"ID":"27306b0c-4a27-4f77-9c9d-45f528f8c1d3","Type":"ContainerDied","Data":"1e1b6767e0a7952aeaa74c52201d2d2ece7acc8718175cb6efd6ce6302754279"}
Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.359077 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l"
Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.471024 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-secret-volume\") pod \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") "
Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.471267 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gh5r\" (UniqueName: \"kubernetes.io/projected/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-kube-api-access-4gh5r\") pod \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") "
Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.471377 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-config-volume\") pod \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\" (UID: \"27306b0c-4a27-4f77-9c9d-45f528f8c1d3\") "
Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.471877 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-config-volume" (OuterVolumeSpecName: "config-volume") pod "27306b0c-4a27-4f77-9c9d-45f528f8c1d3" (UID: "27306b0c-4a27-4f77-9c9d-45f528f8c1d3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.477628 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-kube-api-access-4gh5r" (OuterVolumeSpecName: "kube-api-access-4gh5r") pod "27306b0c-4a27-4f77-9c9d-45f528f8c1d3" (UID: "27306b0c-4a27-4f77-9c9d-45f528f8c1d3"). InnerVolumeSpecName "kube-api-access-4gh5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.480906 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27306b0c-4a27-4f77-9c9d-45f528f8c1d3" (UID: "27306b0c-4a27-4f77-9c9d-45f528f8c1d3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.573789 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gh5r\" (UniqueName: \"kubernetes.io/projected/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-kube-api-access-4gh5r\") on node \"crc\" DevicePath \"\"" Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.573837 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.573855 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27306b0c-4a27-4f77-9c9d-45f528f8c1d3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.920308 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l" event={"ID":"27306b0c-4a27-4f77-9c9d-45f528f8c1d3","Type":"ContainerDied","Data":"80799b6bfb13cc585070b8fb321a03d6509ae814246094c95a559f5f657395fe"} Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.920365 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80799b6bfb13cc585070b8fb321a03d6509ae814246094c95a559f5f657395fe" Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.920449 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501715-zc57l" Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.986794 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"] Feb 03 07:15:04 crc kubenswrapper[4872]: I0203 07:15:04.994172 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501670-d55zh"] Feb 03 07:15:06 crc kubenswrapper[4872]: I0203 07:15:06.134497 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4dd6034-4be8-40ee-91c7-479015131095" path="/var/lib/kubelet/pods/c4dd6034-4be8-40ee-91c7-479015131095/volumes" Feb 03 07:15:11 crc kubenswrapper[4872]: I0203 07:15:11.123732 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:15:11 crc kubenswrapper[4872]: E0203 07:15:11.126091 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:15:26 crc kubenswrapper[4872]: I0203 07:15:26.128769 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:15:26 crc kubenswrapper[4872]: E0203 07:15:26.129891 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:15:34 crc kubenswrapper[4872]: E0203 07:15:34.643535 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9aec36b3_fc51_4b2e_9529_b55afc974191.slice/crio-a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f.scope\": RecentStats: unable to find data in memory cache]" Feb 03 07:15:35 crc kubenswrapper[4872]: I0203 07:15:35.225036 4872 generic.go:334] "Generic (PLEG): container finished" podID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerID="a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f" exitCode=0 Feb 03 07:15:35 crc kubenswrapper[4872]: I0203 07:15:35.225075 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6qslc/must-gather-29slc" event={"ID":"9aec36b3-fc51-4b2e-9529-b55afc974191","Type":"ContainerDied","Data":"a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f"} Feb 03 07:15:35 crc kubenswrapper[4872]: I0203 07:15:35.225632 4872 scope.go:117] "RemoveContainer" containerID="a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f" Feb 03 07:15:35 crc kubenswrapper[4872]: I0203 07:15:35.659189 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6qslc_must-gather-29slc_9aec36b3-fc51-4b2e-9529-b55afc974191/gather/0.log" Feb 03 07:15:39 crc kubenswrapper[4872]: I0203 07:15:39.123201 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:15:39 crc kubenswrapper[4872]: E0203 07:15:39.123978 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:15:44 crc kubenswrapper[4872]: I0203 07:15:44.544674 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6qslc/must-gather-29slc"] Feb 03 07:15:44 crc kubenswrapper[4872]: I0203 07:15:44.545482 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6qslc/must-gather-29slc" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerName="copy" containerID="cri-o://a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795" gracePeriod=2 Feb 03 07:15:44 crc kubenswrapper[4872]: I0203 07:15:44.562274 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6qslc/must-gather-29slc"] Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.094771 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6qslc_must-gather-29slc_9aec36b3-fc51-4b2e-9529-b55afc974191/copy/0.log" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.095775 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.226360 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9aec36b3-fc51-4b2e-9529-b55afc974191-must-gather-output\") pod \"9aec36b3-fc51-4b2e-9529-b55afc974191\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.226583 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4l2n\" (UniqueName: \"kubernetes.io/projected/9aec36b3-fc51-4b2e-9529-b55afc974191-kube-api-access-h4l2n\") pod \"9aec36b3-fc51-4b2e-9529-b55afc974191\" (UID: \"9aec36b3-fc51-4b2e-9529-b55afc974191\") " Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.252052 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aec36b3-fc51-4b2e-9529-b55afc974191-kube-api-access-h4l2n" (OuterVolumeSpecName: "kube-api-access-h4l2n") pod "9aec36b3-fc51-4b2e-9529-b55afc974191" (UID: "9aec36b3-fc51-4b2e-9529-b55afc974191"). InnerVolumeSpecName "kube-api-access-h4l2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.308280 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6qslc_must-gather-29slc_9aec36b3-fc51-4b2e-9529-b55afc974191/copy/0.log" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.308591 4872 generic.go:334] "Generic (PLEG): container finished" podID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerID="a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795" exitCode=143 Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.308747 4872 scope.go:117] "RemoveContainer" containerID="a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.308862 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6qslc/must-gather-29slc" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.328643 4872 scope.go:117] "RemoveContainer" containerID="a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.329917 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4l2n\" (UniqueName: \"kubernetes.io/projected/9aec36b3-fc51-4b2e-9529-b55afc974191-kube-api-access-h4l2n\") on node \"crc\" DevicePath \"\"" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.442280 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aec36b3-fc51-4b2e-9529-b55afc974191-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9aec36b3-fc51-4b2e-9529-b55afc974191" (UID: "9aec36b3-fc51-4b2e-9529-b55afc974191"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.455443 4872 scope.go:117] "RemoveContainer" containerID="a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795" Feb 03 07:15:45 crc kubenswrapper[4872]: E0203 07:15:45.466820 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795\": container with ID starting with a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795 not found: ID does not exist" containerID="a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.466867 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795"} err="failed to get container status \"a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795\": rpc error: code = NotFound desc = could not find container \"a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795\": container with ID starting with a0db8a0de4f561d6bc3d1d8aa003cbf16cb6f01f72d1f4a4c99ba8d9bd114795 not found: ID does not exist" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.466894 4872 scope.go:117] "RemoveContainer" containerID="a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f" Feb 03 07:15:45 crc kubenswrapper[4872]: E0203 07:15:45.468053 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f\": container with ID starting with a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f not found: ID does not exist" containerID="a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.468079 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f"} err="failed to get container status \"a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f\": rpc error: code = NotFound desc = could not find container \"a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f\": container with ID starting with a882f8f32983a8857163c047e98069cd4463ca3df2fb27d280c333d8707c2a7f not found: ID does not exist" Feb 03 07:15:45 crc kubenswrapper[4872]: I0203 07:15:45.534127 4872 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9aec36b3-fc51-4b2e-9529-b55afc974191-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 07:15:46 crc kubenswrapper[4872]: I0203 07:15:46.132706 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" path="/var/lib/kubelet/pods/9aec36b3-fc51-4b2e-9529-b55afc974191/volumes" Feb 03 07:15:54 crc kubenswrapper[4872]: I0203 07:15:54.123243 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:15:54 crc kubenswrapper[4872]: E0203 07:15:54.124975 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16"
Feb 03 07:16:01 crc kubenswrapper[4872]: I0203 07:16:01.135196 4872 scope.go:117] "RemoveContainer" containerID="b630b27f387eaff76bbb1025a9c8f85eaf1464021f648ac627668b36ab4c3ed5"
Feb 03 07:16:01 crc kubenswrapper[4872]: I0203 07:16:01.163643 4872 scope.go:117] "RemoveContainer" containerID="38eef8d28ab440c9b22f45d3c4864eec4772772c34980450ad993ef52370667a"
Feb 03 07:16:06 crc kubenswrapper[4872]: I0203 07:16:06.123894 4872 scope.go:117] "RemoveContainer" containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4"
Feb 03 07:16:06 crc kubenswrapper[4872]: I0203 07:16:06.497759 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"dfce94cdfd61e5881eaf79368529a71f1fe987514ba0a8af83843c863c3af6c0"}
Feb 03 07:17:04 crc kubenswrapper[4872]: I0203 07:17:04.653281 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5a46bed9-4154-4a62-8805-fe67c55a2d89" containerName="galera" probeResult="failure" output="command timed out"
Feb 03 07:17:23 crc kubenswrapper[4872]: E0203 07:17:23.295129 4872 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.088s"
Feb 03 07:18:14 crc kubenswrapper[4872]: I0203 07:18:14.654728 4872 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5a46bed9-4154-4a62-8805-fe67c55a2d89" containerName="galera" probeResult="failure" output="command timed out"
Feb 03 07:18:31 crc kubenswrapper[4872]: I0203 07:18:31.271914 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 07:18:31 crc kubenswrapper[4872]: I0203 07:18:31.272618 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.092340 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rtvzh"]
Feb 03 07:18:47 crc kubenswrapper[4872]: E0203 07:18:47.093563 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerName="gather"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.093586 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerName="gather"
Feb 03 07:18:47 crc kubenswrapper[4872]: E0203 07:18:47.093615 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27306b0c-4a27-4f77-9c9d-45f528f8c1d3" containerName="collect-profiles"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.093629 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="27306b0c-4a27-4f77-9c9d-45f528f8c1d3" containerName="collect-profiles"
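The Liveness failures above are plain HTTP GETs against the daemon's health endpoint: while the machine-config-daemon container is down, the TCP connect is refused and the probe fails with exactly the error logged. A minimal sketch of such a check in Go; the endpoint is taken from the log, while the 1s timeout and the 200-399 success range are assumptions mirroring common kubelet probe semantics:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness-style check, in the spirit of the
// failing probe above against http://127.0.0.1:8798/health.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second} // timeout is assumed
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```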
Feb 03 07:18:47 crc kubenswrapper[4872]: E0203 07:18:47.093658 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerName="copy"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.093672 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerName="copy"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.094058 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerName="copy"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.094092 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="27306b0c-4a27-4f77-9c9d-45f528f8c1d3" containerName="collect-profiles"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.094139 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aec36b3-fc51-4b2e-9529-b55afc974191" containerName="gather"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.096605 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.108306 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtvzh"]
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.162065 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hww4v\" (UniqueName: \"kubernetes.io/projected/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-kube-api-access-hww4v\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.162180 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-catalog-content\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.162235 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-utilities\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.263581 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hww4v\" (UniqueName: \"kubernetes.io/projected/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-kube-api-access-hww4v\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.263674 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-catalog-content\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.263731 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-utilities\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.264768 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-catalog-content\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.264992 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-utilities\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.298140 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hww4v\" (UniqueName: \"kubernetes.io/projected/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-kube-api-access-hww4v\") pod \"redhat-marketplace-rtvzh\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.421012 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtvzh"
Feb 03 07:18:47 crc kubenswrapper[4872]: I0203 07:18:47.886874 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtvzh"]
Feb 03 07:18:47 crc kubenswrapper[4872]: W0203 07:18:47.910471 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde7bc4c8_9b30_4fbc_8342_dbaebcd01387.slice/crio-fcb2784509933645b53ca4496b89249f3160f7cd0b430dd855fdc6a3940f05d2 WatchSource:0}: Error finding container fcb2784509933645b53ca4496b89249f3160f7cd0b430dd855fdc6a3940f05d2: Status 404 returned error can't find the container with id fcb2784509933645b53ca4496b89249f3160f7cd0b430dd855fdc6a3940f05d2
Feb 03 07:18:48 crc kubenswrapper[4872]: I0203 07:18:48.189405 4872 generic.go:334] "Generic (PLEG): container finished" podID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerID="e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e" exitCode=0
Feb 03 07:18:48 crc kubenswrapper[4872]: I0203 07:18:48.191568 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtvzh" event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerDied","Data":"e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e"}
Feb 03 07:18:48 crc kubenswrapper[4872]: I0203 07:18:48.191766 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtvzh" event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerStarted","Data":"fcb2784509933645b53ca4496b89249f3160f7cd0b430dd855fdc6a3940f05d2"}
Feb 03 07:18:48 crc kubenswrapper[4872]: I0203 07:18:48.194877 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 07:18:49 crc kubenswrapper[4872]: I0203 07:18:49.201195 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtvzh" event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerStarted","Data":"fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c"}
event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerStarted","Data":"fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c"} Feb 03 07:18:50 crc kubenswrapper[4872]: I0203 07:18:50.214313 4872 generic.go:334] "Generic (PLEG): container finished" podID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerID="fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c" exitCode=0 Feb 03 07:18:50 crc kubenswrapper[4872]: I0203 07:18:50.214514 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtvzh" event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerDied","Data":"fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c"} Feb 03 07:18:51 crc kubenswrapper[4872]: I0203 07:18:51.227609 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtvzh" event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerStarted","Data":"849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8"} Feb 03 07:18:51 crc kubenswrapper[4872]: I0203 07:18:51.258377 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtvzh" podStartSLOduration=1.765177339 podStartE2EDuration="4.258355084s" podCreationTimestamp="2026-02-03 07:18:47 +0000 UTC" firstStartedPulling="2026-02-03 07:18:48.194426798 +0000 UTC m=+4698.777118222" lastFinishedPulling="2026-02-03 07:18:50.687604543 +0000 UTC m=+4701.270295967" observedRunningTime="2026-02-03 07:18:51.254602464 +0000 UTC m=+4701.837293878" watchObservedRunningTime="2026-02-03 07:18:51.258355084 +0000 UTC m=+4701.841046518" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.684377 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnxbd"] Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.688368 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.695847 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnxbd"] Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.828056 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htvwt\" (UniqueName: \"kubernetes.io/projected/e63d0e3b-9517-4eba-a976-4dd400efe271-kube-api-access-htvwt\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.828158 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-catalog-content\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.828431 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-utilities\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.930006 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-catalog-content\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.930096 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-utilities\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.930203 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htvwt\" (UniqueName: \"kubernetes.io/projected/e63d0e3b-9517-4eba-a976-4dd400efe271-kube-api-access-htvwt\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.930511 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-catalog-content\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.930601 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-utilities\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:55 crc kubenswrapper[4872]: I0203 07:18:55.971500 4872 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-htvwt\" (UniqueName: \"kubernetes.io/projected/e63d0e3b-9517-4eba-a976-4dd400efe271-kube-api-access-htvwt\") pod \"community-operators-wnxbd\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:56 crc kubenswrapper[4872]: I0203 07:18:56.027714 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:18:56 crc kubenswrapper[4872]: I0203 07:18:56.608067 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnxbd"] Feb 03 07:18:57 crc kubenswrapper[4872]: I0203 07:18:57.310073 4872 generic.go:334] "Generic (PLEG): container finished" podID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerID="0fa0b39e6d42930e5052d6a971ce0a406e96921a36dd8171daca01923e7755e6" exitCode=0 Feb 03 07:18:57 crc kubenswrapper[4872]: I0203 07:18:57.310201 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnxbd" event={"ID":"e63d0e3b-9517-4eba-a976-4dd400efe271","Type":"ContainerDied","Data":"0fa0b39e6d42930e5052d6a971ce0a406e96921a36dd8171daca01923e7755e6"} Feb 03 07:18:57 crc kubenswrapper[4872]: I0203 07:18:57.310470 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnxbd" event={"ID":"e63d0e3b-9517-4eba-a976-4dd400efe271","Type":"ContainerStarted","Data":"0ae1da592bf6053387f8e895757a9881349cdb66a709e2ec5385cb7830e81f16"} Feb 03 07:18:57 crc kubenswrapper[4872]: I0203 07:18:57.422015 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtvzh" Feb 03 07:18:57 crc kubenswrapper[4872]: I0203 07:18:57.422076 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtvzh" Feb 03 07:18:57 crc kubenswrapper[4872]: I0203 07:18:57.495366 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtvzh" Feb 03 07:18:58 crc kubenswrapper[4872]: I0203 07:18:58.322348 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnxbd" event={"ID":"e63d0e3b-9517-4eba-a976-4dd400efe271","Type":"ContainerStarted","Data":"99453294b5d677bcc52c90ba61dcc22cddeefa97e86f16579ec32b5ab815e002"} Feb 03 07:18:58 crc kubenswrapper[4872]: I0203 07:18:58.395123 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rtvzh" Feb 03 07:18:59 crc kubenswrapper[4872]: I0203 07:18:59.854079 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtvzh"] Feb 03 07:19:00 crc kubenswrapper[4872]: I0203 07:19:00.343136 4872 generic.go:334] "Generic (PLEG): container finished" podID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerID="99453294b5d677bcc52c90ba61dcc22cddeefa97e86f16579ec32b5ab815e002" exitCode=0 Feb 03 07:19:00 crc kubenswrapper[4872]: I0203 07:19:00.343217 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnxbd" event={"ID":"e63d0e3b-9517-4eba-a976-4dd400efe271","Type":"ContainerDied","Data":"99453294b5d677bcc52c90ba61dcc22cddeefa97e86f16579ec32b5ab815e002"} Feb 03 07:19:00 crc kubenswrapper[4872]: I0203 07:19:00.343393 4872 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-rtvzh" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="registry-server" containerID="cri-o://849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8" gracePeriod=2 Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.041210 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtvzh" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.145304 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-catalog-content\") pod \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.145385 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hww4v\" (UniqueName: \"kubernetes.io/projected/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-kube-api-access-hww4v\") pod \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.145433 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-utilities\") pod \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\" (UID: \"de7bc4c8-9b30-4fbc-8342-dbaebcd01387\") " Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.146405 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-utilities" (OuterVolumeSpecName: "utilities") pod "de7bc4c8-9b30-4fbc-8342-dbaebcd01387" (UID: "de7bc4c8-9b30-4fbc-8342-dbaebcd01387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.152620 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-kube-api-access-hww4v" (OuterVolumeSpecName: "kube-api-access-hww4v") pod "de7bc4c8-9b30-4fbc-8342-dbaebcd01387" (UID: "de7bc4c8-9b30-4fbc-8342-dbaebcd01387"). InnerVolumeSpecName "kube-api-access-hww4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.170200 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de7bc4c8-9b30-4fbc-8342-dbaebcd01387" (UID: "de7bc4c8-9b30-4fbc-8342-dbaebcd01387"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.247257 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.247289 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hww4v\" (UniqueName: \"kubernetes.io/projected/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-kube-api-access-hww4v\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.247301 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de7bc4c8-9b30-4fbc-8342-dbaebcd01387-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.271284 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.271510 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.352389 4872 generic.go:334] "Generic (PLEG): container finished" podID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerID="849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8" exitCode=0 Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.352452 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtvzh" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.352438 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtvzh" event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerDied","Data":"849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8"} Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.352863 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtvzh" event={"ID":"de7bc4c8-9b30-4fbc-8342-dbaebcd01387","Type":"ContainerDied","Data":"fcb2784509933645b53ca4496b89249f3160f7cd0b430dd855fdc6a3940f05d2"} Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.352890 4872 scope.go:117] "RemoveContainer" containerID="849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.355503 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnxbd" event={"ID":"e63d0e3b-9517-4eba-a976-4dd400efe271","Type":"ContainerStarted","Data":"31ddf045efbaf557f2f8914469e37835355b9dabee51fa9b03f3ec955c811702"} Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.372328 4872 scope.go:117] "RemoveContainer" containerID="fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.381325 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnxbd" podStartSLOduration=2.8757675799999998 podStartE2EDuration="6.381309331s" podCreationTimestamp="2026-02-03 07:18:55 +0000 UTC" firstStartedPulling="2026-02-03 07:18:57.313333631 +0000 UTC m=+4707.896025035" lastFinishedPulling="2026-02-03 07:19:00.818875372 +0000 UTC m=+4711.401566786" observedRunningTime="2026-02-03 07:19:01.379480307 +0000 UTC m=+4711.962171721" watchObservedRunningTime="2026-02-03 07:19:01.381309331 +0000 UTC m=+4711.964000745" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.397957 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtvzh"] Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.398107 4872 scope.go:117] "RemoveContainer" containerID="e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.413440 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtvzh"] Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.441306 4872 scope.go:117] "RemoveContainer" containerID="849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8" Feb 03 07:19:01 crc kubenswrapper[4872]: E0203 07:19:01.442278 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8\": container with ID starting with 849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8 not found: ID does not exist" containerID="849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.442346 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8"} err="failed to get container status \"849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8\": rpc 
error: code = NotFound desc = could not find container \"849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8\": container with ID starting with 849ac835723e8485027c0b33b32198240a859cfc91ad2ebe19d3a589093caaf8 not found: ID does not exist" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.442388 4872 scope.go:117] "RemoveContainer" containerID="fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c" Feb 03 07:19:01 crc kubenswrapper[4872]: E0203 07:19:01.442736 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c\": container with ID starting with fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c not found: ID does not exist" containerID="fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.442766 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c"} err="failed to get container status \"fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c\": rpc error: code = NotFound desc = could not find container \"fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c\": container with ID starting with fc0ca7dcbbca112b70bc5804f210240cc32fae365bbf6ef853df979243c7b38c not found: ID does not exist" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.442788 4872 scope.go:117] "RemoveContainer" containerID="e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e" Feb 03 07:19:01 crc kubenswrapper[4872]: E0203 07:19:01.443127 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e\": container with ID starting with e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e not found: ID does not exist" containerID="e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e" Feb 03 07:19:01 crc kubenswrapper[4872]: I0203 07:19:01.443147 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e"} err="failed to get container status \"e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e\": rpc error: code = NotFound desc = could not find container \"e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e\": container with ID starting with e7c0196460d8dbeb35b00bdc222791c1e6665b72d297dda706f60fe9a671209e not found: ID does not exist" Feb 03 07:19:02 crc kubenswrapper[4872]: I0203 07:19:02.137070 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" path="/var/lib/kubelet/pods/de7bc4c8-9b30-4fbc-8342-dbaebcd01387/volumes" Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.037232 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gmvpd/must-gather-w2dql"] Feb 03 07:19:03 crc kubenswrapper[4872]: E0203 07:19:03.037786 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="registry-server" Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.037804 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="registry-server" Feb 03 
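The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above are benign: a previous pass already deleted these containers, so the runtime answers NotFound and the kubelet logs it and moves on. A sketch of that idempotent-delete pattern against a gRPC runtime API follows; the function names are hypothetical, not the kubelet's actual code:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a gRPC NotFound from the runtime as "already
// deleted" rather than a real failure, mirroring the benign NotFound
// noise in the log above.
func removeContainer(id string, rpcRemove func(id string) error) error {
	if err := rpcRemove(id); err != nil {
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			fmt.Printf("container %s already gone, nothing to do\n", id)
			return nil
		}
		return fmt.Errorf("remove %s: %w", id, err)
	}
	return nil
}

func main() {
	// Simulate the runtime answering NotFound, as CRI-O does above.
	alreadyGone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	if err := removeContainer("849ac835", alreadyGone); err != nil {
		fmt.Println("unexpected:", err)
	}
}
```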
Feb 03 07:19:03 crc kubenswrapper[4872]: E0203 07:19:03.037818 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="extract-utilities"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.037825 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="extract-utilities"
Feb 03 07:19:03 crc kubenswrapper[4872]: E0203 07:19:03.037857 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="extract-content"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.037863 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="extract-content"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.038014 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7bc4c8-9b30-4fbc-8342-dbaebcd01387" containerName="registry-server"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.038923 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.044142 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gmvpd"/"openshift-service-ca.crt"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.044201 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gmvpd"/"kube-root-ca.crt"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.056301 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gmvpd/must-gather-w2dql"]
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.182215 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n82f\" (UniqueName: \"kubernetes.io/projected/2c592100-3d66-4d73-9d4e-a18520973fb9-kube-api-access-8n82f\") pod \"must-gather-w2dql\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.182953 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c592100-3d66-4d73-9d4e-a18520973fb9-must-gather-output\") pod \"must-gather-w2dql\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.284213 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c592100-3d66-4d73-9d4e-a18520973fb9-must-gather-output\") pod \"must-gather-w2dql\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.284286 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n82f\" (UniqueName: \"kubernetes.io/projected/2c592100-3d66-4d73-9d4e-a18520973fb9-kube-api-access-8n82f\") pod \"must-gather-w2dql\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.284961 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c592100-3d66-4d73-9d4e-a18520973fb9-must-gather-output\") pod \"must-gather-w2dql\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.304521 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n82f\" (UniqueName: \"kubernetes.io/projected/2c592100-3d66-4d73-9d4e-a18520973fb9-kube-api-access-8n82f\") pod \"must-gather-w2dql\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.378162 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/must-gather-w2dql"
Feb 03 07:19:03 crc kubenswrapper[4872]: I0203 07:19:03.951485 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gmvpd/must-gather-w2dql"]
Feb 03 07:19:04 crc kubenswrapper[4872]: I0203 07:19:04.416914 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/must-gather-w2dql" event={"ID":"2c592100-3d66-4d73-9d4e-a18520973fb9","Type":"ContainerStarted","Data":"cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f"}
Feb 03 07:19:04 crc kubenswrapper[4872]: I0203 07:19:04.417310 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/must-gather-w2dql" event={"ID":"2c592100-3d66-4d73-9d4e-a18520973fb9","Type":"ContainerStarted","Data":"840f71fa09f73b76b3dc306f55393ee51682e0650661497382781217ee539b24"}
Feb 03 07:19:05 crc kubenswrapper[4872]: I0203 07:19:05.425193 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/must-gather-w2dql" event={"ID":"2c592100-3d66-4d73-9d4e-a18520973fb9","Type":"ContainerStarted","Data":"34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c"}
Feb 03 07:19:05 crc kubenswrapper[4872]: I0203 07:19:05.450490 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gmvpd/must-gather-w2dql" podStartSLOduration=2.45046621 podStartE2EDuration="2.45046621s" podCreationTimestamp="2026-02-03 07:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 07:19:05.440726584 +0000 UTC m=+4716.023418028" watchObservedRunningTime="2026-02-03 07:19:05.45046621 +0000 UTC m=+4716.033157634"
Feb 03 07:19:06 crc kubenswrapper[4872]: I0203 07:19:06.028739 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnxbd"
Feb 03 07:19:06 crc kubenswrapper[4872]: I0203 07:19:06.028784 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnxbd"
Feb 03 07:19:06 crc kubenswrapper[4872]: I0203 07:19:06.084494 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnxbd"
Feb 03 07:19:06 crc kubenswrapper[4872]: I0203 07:19:06.494923 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnxbd"
Feb 03 07:19:06 crc kubenswrapper[4872]: I0203 07:19:06.593982 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnxbd"]
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.149040 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-cwrs8"]
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.150346 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.153473 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gmvpd"/"default-dockercfg-v5whp"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.281453 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5l8r\" (UniqueName: \"kubernetes.io/projected/270dea49-e258-4d18-ace0-89f8a2b73b62-kube-api-access-v5l8r\") pod \"crc-debug-cwrs8\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " pod="openshift-must-gather-gmvpd/crc-debug-cwrs8"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.281524 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/270dea49-e258-4d18-ace0-89f8a2b73b62-host\") pod \"crc-debug-cwrs8\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " pod="openshift-must-gather-gmvpd/crc-debug-cwrs8"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.382847 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/270dea49-e258-4d18-ace0-89f8a2b73b62-host\") pod \"crc-debug-cwrs8\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " pod="openshift-must-gather-gmvpd/crc-debug-cwrs8"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.382977 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/270dea49-e258-4d18-ace0-89f8a2b73b62-host\") pod \"crc-debug-cwrs8\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " pod="openshift-must-gather-gmvpd/crc-debug-cwrs8"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.383093 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5l8r\" (UniqueName: \"kubernetes.io/projected/270dea49-e258-4d18-ace0-89f8a2b73b62-kube-api-access-v5l8r\") pod \"crc-debug-cwrs8\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " pod="openshift-must-gather-gmvpd/crc-debug-cwrs8"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.422176 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5l8r\" (UniqueName: \"kubernetes.io/projected/270dea49-e258-4d18-ace0-89f8a2b73b62-kube-api-access-v5l8r\") pod \"crc-debug-cwrs8\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " pod="openshift-must-gather-gmvpd/crc-debug-cwrs8"
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.452739 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnxbd" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="registry-server" containerID="cri-o://31ddf045efbaf557f2f8914469e37835355b9dabee51fa9b03f3ec955c811702" gracePeriod=2
Feb 03 07:19:08 crc kubenswrapper[4872]: I0203 07:19:08.468057 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.463224 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8" event={"ID":"270dea49-e258-4d18-ace0-89f8a2b73b62","Type":"ContainerStarted","Data":"264e0494deb4e4ec433de9f5c08d19b6d9af5e0bf2f79b2f84bd80c37e74b773"} Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.463705 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8" event={"ID":"270dea49-e258-4d18-ace0-89f8a2b73b62","Type":"ContainerStarted","Data":"babdd4f097d229a355dd64da5b3196c29c27a1f4ed18b49bb78eb049048edf4e"} Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.470235 4872 generic.go:334] "Generic (PLEG): container finished" podID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerID="31ddf045efbaf557f2f8914469e37835355b9dabee51fa9b03f3ec955c811702" exitCode=0 Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.470275 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnxbd" event={"ID":"e63d0e3b-9517-4eba-a976-4dd400efe271","Type":"ContainerDied","Data":"31ddf045efbaf557f2f8914469e37835355b9dabee51fa9b03f3ec955c811702"} Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.494211 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8" podStartSLOduration=1.494186662 podStartE2EDuration="1.494186662s" podCreationTimestamp="2026-02-03 07:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 07:19:09.483450672 +0000 UTC m=+4720.066142086" watchObservedRunningTime="2026-02-03 07:19:09.494186662 +0000 UTC m=+4720.076878076" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.552042 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.613741 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htvwt\" (UniqueName: \"kubernetes.io/projected/e63d0e3b-9517-4eba-a976-4dd400efe271-kube-api-access-htvwt\") pod \"e63d0e3b-9517-4eba-a976-4dd400efe271\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.613923 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-utilities\") pod \"e63d0e3b-9517-4eba-a976-4dd400efe271\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.613951 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-catalog-content\") pod \"e63d0e3b-9517-4eba-a976-4dd400efe271\" (UID: \"e63d0e3b-9517-4eba-a976-4dd400efe271\") " Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.621846 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e63d0e3b-9517-4eba-a976-4dd400efe271-kube-api-access-htvwt" (OuterVolumeSpecName: "kube-api-access-htvwt") pod "e63d0e3b-9517-4eba-a976-4dd400efe271" (UID: "e63d0e3b-9517-4eba-a976-4dd400efe271"). InnerVolumeSpecName "kube-api-access-htvwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.633677 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-utilities" (OuterVolumeSpecName: "utilities") pod "e63d0e3b-9517-4eba-a976-4dd400efe271" (UID: "e63d0e3b-9517-4eba-a976-4dd400efe271"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.673162 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e63d0e3b-9517-4eba-a976-4dd400efe271" (UID: "e63d0e3b-9517-4eba-a976-4dd400efe271"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.716207 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.716232 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e63d0e3b-9517-4eba-a976-4dd400efe271-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:09 crc kubenswrapper[4872]: I0203 07:19:09.716242 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htvwt\" (UniqueName: \"kubernetes.io/projected/e63d0e3b-9517-4eba-a976-4dd400efe271-kube-api-access-htvwt\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:10 crc kubenswrapper[4872]: I0203 07:19:10.479517 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnxbd" event={"ID":"e63d0e3b-9517-4eba-a976-4dd400efe271","Type":"ContainerDied","Data":"0ae1da592bf6053387f8e895757a9881349cdb66a709e2ec5385cb7830e81f16"} Feb 03 07:19:10 crc kubenswrapper[4872]: I0203 07:19:10.479793 4872 scope.go:117] "RemoveContainer" containerID="31ddf045efbaf557f2f8914469e37835355b9dabee51fa9b03f3ec955c811702" Feb 03 07:19:10 crc kubenswrapper[4872]: I0203 07:19:10.479592 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnxbd" Feb 03 07:19:10 crc kubenswrapper[4872]: I0203 07:19:10.507274 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnxbd"] Feb 03 07:19:10 crc kubenswrapper[4872]: I0203 07:19:10.520887 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnxbd"] Feb 03 07:19:10 crc kubenswrapper[4872]: I0203 07:19:10.524849 4872 scope.go:117] "RemoveContainer" containerID="99453294b5d677bcc52c90ba61dcc22cddeefa97e86f16579ec32b5ab815e002" Feb 03 07:19:10 crc kubenswrapper[4872]: I0203 07:19:10.558627 4872 scope.go:117] "RemoveContainer" containerID="0fa0b39e6d42930e5052d6a971ce0a406e96921a36dd8171daca01923e7755e6" Feb 03 07:19:12 crc kubenswrapper[4872]: I0203 07:19:12.134388 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" path="/var/lib/kubelet/pods/e63d0e3b-9517-4eba-a976-4dd400efe271/volumes" Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.270910 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.271493 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.271549 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.272334 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfce94cdfd61e5881eaf79368529a71f1fe987514ba0a8af83843c863c3af6c0"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.272406 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://dfce94cdfd61e5881eaf79368529a71f1fe987514ba0a8af83843c863c3af6c0" gracePeriod=600 Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.713874 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="dfce94cdfd61e5881eaf79368529a71f1fe987514ba0a8af83843c863c3af6c0" exitCode=0 Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.714041 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"dfce94cdfd61e5881eaf79368529a71f1fe987514ba0a8af83843c863c3af6c0"} Feb 03 07:19:31 crc kubenswrapper[4872]: I0203 07:19:31.714274 4872 scope.go:117] "RemoveContainer" 
containerID="49ac2d39a26b889d29c577e10031f9b0679674dd8760a781afe43e08d56650a4" Feb 03 07:19:32 crc kubenswrapper[4872]: I0203 07:19:32.743327 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16"} Feb 03 07:19:55 crc kubenswrapper[4872]: I0203 07:19:55.963330 4872 generic.go:334] "Generic (PLEG): container finished" podID="270dea49-e258-4d18-ace0-89f8a2b73b62" containerID="264e0494deb4e4ec433de9f5c08d19b6d9af5e0bf2f79b2f84bd80c37e74b773" exitCode=0 Feb 03 07:19:55 crc kubenswrapper[4872]: I0203 07:19:55.963417 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8" event={"ID":"270dea49-e258-4d18-ace0-89f8a2b73b62","Type":"ContainerDied","Data":"264e0494deb4e4ec433de9f5c08d19b6d9af5e0bf2f79b2f84bd80c37e74b773"} Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.062581 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8" Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.098008 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-cwrs8"] Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.106143 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-cwrs8"] Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.163671 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/270dea49-e258-4d18-ace0-89f8a2b73b62-host\") pod \"270dea49-e258-4d18-ace0-89f8a2b73b62\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.164003 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5l8r\" (UniqueName: \"kubernetes.io/projected/270dea49-e258-4d18-ace0-89f8a2b73b62-kube-api-access-v5l8r\") pod \"270dea49-e258-4d18-ace0-89f8a2b73b62\" (UID: \"270dea49-e258-4d18-ace0-89f8a2b73b62\") " Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.163766 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/270dea49-e258-4d18-ace0-89f8a2b73b62-host" (OuterVolumeSpecName: "host") pod "270dea49-e258-4d18-ace0-89f8a2b73b62" (UID: "270dea49-e258-4d18-ace0-89f8a2b73b62"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.164616 4872 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/270dea49-e258-4d18-ace0-89f8a2b73b62-host\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.170858 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270dea49-e258-4d18-ace0-89f8a2b73b62-kube-api-access-v5l8r" (OuterVolumeSpecName: "kube-api-access-v5l8r") pod "270dea49-e258-4d18-ace0-89f8a2b73b62" (UID: "270dea49-e258-4d18-ace0-89f8a2b73b62"). InnerVolumeSpecName "kube-api-access-v5l8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.266201 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5l8r\" (UniqueName: \"kubernetes.io/projected/270dea49-e258-4d18-ace0-89f8a2b73b62-kube-api-access-v5l8r\") on node \"crc\" DevicePath \"\"" Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.980048 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="babdd4f097d229a355dd64da5b3196c29c27a1f4ed18b49bb78eb049048edf4e" Feb 03 07:19:57 crc kubenswrapper[4872]: I0203 07:19:57.980103 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-cwrs8" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.133407 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270dea49-e258-4d18-ace0-89f8a2b73b62" path="/var/lib/kubelet/pods/270dea49-e258-4d18-ace0-89f8a2b73b62/volumes" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.287048 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-bwf4b"] Feb 03 07:19:58 crc kubenswrapper[4872]: E0203 07:19:58.287412 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270dea49-e258-4d18-ace0-89f8a2b73b62" containerName="container-00" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.287426 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="270dea49-e258-4d18-ace0-89f8a2b73b62" containerName="container-00" Feb 03 07:19:58 crc kubenswrapper[4872]: E0203 07:19:58.287441 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="extract-content" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.287447 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="extract-content" Feb 03 07:19:58 crc kubenswrapper[4872]: E0203 07:19:58.287459 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="extract-utilities" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.287465 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="extract-utilities" Feb 03 07:19:58 crc kubenswrapper[4872]: E0203 07:19:58.287471 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="registry-server" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.287476 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="registry-server" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.287651 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="e63d0e3b-9517-4eba-a976-4dd400efe271" containerName="registry-server" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.287671 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="270dea49-e258-4d18-ace0-89f8a2b73b62" containerName="container-00" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.288223 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.291474 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gmvpd"/"default-dockercfg-v5whp" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.385807 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48268431-79df-451c-83aa-75398945a3cf-host\") pod \"crc-debug-bwf4b\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.385973 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r86kd\" (UniqueName: \"kubernetes.io/projected/48268431-79df-451c-83aa-75398945a3cf-kube-api-access-r86kd\") pod \"crc-debug-bwf4b\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.487980 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r86kd\" (UniqueName: \"kubernetes.io/projected/48268431-79df-451c-83aa-75398945a3cf-kube-api-access-r86kd\") pod \"crc-debug-bwf4b\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.488060 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48268431-79df-451c-83aa-75398945a3cf-host\") pod \"crc-debug-bwf4b\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.488202 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48268431-79df-451c-83aa-75398945a3cf-host\") pod \"crc-debug-bwf4b\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.516594 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r86kd\" (UniqueName: \"kubernetes.io/projected/48268431-79df-451c-83aa-75398945a3cf-kube-api-access-r86kd\") pod \"crc-debug-bwf4b\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.612246 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.989927 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" event={"ID":"48268431-79df-451c-83aa-75398945a3cf","Type":"ContainerStarted","Data":"9b9c0bc64419e1d7050441e94f899ef4d0701a1f3ce58aec70cdad3d25a26a90"} Feb 03 07:19:58 crc kubenswrapper[4872]: I0203 07:19:58.990250 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" event={"ID":"48268431-79df-451c-83aa-75398945a3cf","Type":"ContainerStarted","Data":"2320f758173f13ee3660d528884b94d998b76d2bef3aee1d5fc7b551100822e6"} Feb 03 07:20:00 crc kubenswrapper[4872]: I0203 07:20:00.001343 4872 generic.go:334] "Generic (PLEG): container finished" podID="48268431-79df-451c-83aa-75398945a3cf" containerID="9b9c0bc64419e1d7050441e94f899ef4d0701a1f3ce58aec70cdad3d25a26a90" exitCode=0 Feb 03 07:20:00 crc kubenswrapper[4872]: I0203 07:20:00.001396 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" event={"ID":"48268431-79df-451c-83aa-75398945a3cf","Type":"ContainerDied","Data":"9b9c0bc64419e1d7050441e94f899ef4d0701a1f3ce58aec70cdad3d25a26a90"} Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.117318 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.154918 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-bwf4b"] Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.161891 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-bwf4b"] Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.238963 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48268431-79df-451c-83aa-75398945a3cf-host\") pod \"48268431-79df-451c-83aa-75398945a3cf\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.239052 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48268431-79df-451c-83aa-75398945a3cf-host" (OuterVolumeSpecName: "host") pod "48268431-79df-451c-83aa-75398945a3cf" (UID: "48268431-79df-451c-83aa-75398945a3cf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.239123 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r86kd\" (UniqueName: \"kubernetes.io/projected/48268431-79df-451c-83aa-75398945a3cf-kube-api-access-r86kd\") pod \"48268431-79df-451c-83aa-75398945a3cf\" (UID: \"48268431-79df-451c-83aa-75398945a3cf\") " Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.239634 4872 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/48268431-79df-451c-83aa-75398945a3cf-host\") on node \"crc\" DevicePath \"\"" Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.250960 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48268431-79df-451c-83aa-75398945a3cf-kube-api-access-r86kd" (OuterVolumeSpecName: "kube-api-access-r86kd") pod "48268431-79df-451c-83aa-75398945a3cf" (UID: "48268431-79df-451c-83aa-75398945a3cf"). 
InnerVolumeSpecName "kube-api-access-r86kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:20:01 crc kubenswrapper[4872]: I0203 07:20:01.341973 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r86kd\" (UniqueName: \"kubernetes.io/projected/48268431-79df-451c-83aa-75398945a3cf-kube-api-access-r86kd\") on node \"crc\" DevicePath \"\"" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.019005 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2320f758173f13ee3660d528884b94d998b76d2bef3aee1d5fc7b551100822e6" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.019069 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-bwf4b" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.133989 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48268431-79df-451c-83aa-75398945a3cf" path="/var/lib/kubelet/pods/48268431-79df-451c-83aa-75398945a3cf/volumes" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.328938 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-vh44m"] Feb 03 07:20:02 crc kubenswrapper[4872]: E0203 07:20:02.329347 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48268431-79df-451c-83aa-75398945a3cf" containerName="container-00" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.329363 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="48268431-79df-451c-83aa-75398945a3cf" containerName="container-00" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.329526 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="48268431-79df-451c-83aa-75398945a3cf" containerName="container-00" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.330127 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.332016 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gmvpd"/"default-dockercfg-v5whp" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.359313 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nd67\" (UniqueName: \"kubernetes.io/projected/59736de2-b29a-4a59-b742-ab8eeb731a70-kube-api-access-7nd67\") pod \"crc-debug-vh44m\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.359424 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59736de2-b29a-4a59-b742-ab8eeb731a70-host\") pod \"crc-debug-vh44m\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.460585 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nd67\" (UniqueName: \"kubernetes.io/projected/59736de2-b29a-4a59-b742-ab8eeb731a70-kube-api-access-7nd67\") pod \"crc-debug-vh44m\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.460859 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59736de2-b29a-4a59-b742-ab8eeb731a70-host\") pod \"crc-debug-vh44m\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.460953 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59736de2-b29a-4a59-b742-ab8eeb731a70-host\") pod \"crc-debug-vh44m\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.479585 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nd67\" (UniqueName: \"kubernetes.io/projected/59736de2-b29a-4a59-b742-ab8eeb731a70-kube-api-access-7nd67\") pod \"crc-debug-vh44m\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:02 crc kubenswrapper[4872]: I0203 07:20:02.663241 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:03 crc kubenswrapper[4872]: I0203 07:20:03.027206 4872 generic.go:334] "Generic (PLEG): container finished" podID="59736de2-b29a-4a59-b742-ab8eeb731a70" containerID="6aa8802d03698b9a767bdc9b2dda5b4b8181c96da8752b2e5e5e5c9117c6947f" exitCode=0 Feb 03 07:20:03 crc kubenswrapper[4872]: I0203 07:20:03.027378 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-vh44m" event={"ID":"59736de2-b29a-4a59-b742-ab8eeb731a70","Type":"ContainerDied","Data":"6aa8802d03698b9a767bdc9b2dda5b4b8181c96da8752b2e5e5e5c9117c6947f"} Feb 03 07:20:03 crc kubenswrapper[4872]: I0203 07:20:03.027544 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/crc-debug-vh44m" event={"ID":"59736de2-b29a-4a59-b742-ab8eeb731a70","Type":"ContainerStarted","Data":"51a24e08e1d1ca93cef7f2b05b0909cfa85a3f967195279ef54b9660bd80e0d8"} Feb 03 07:20:03 crc kubenswrapper[4872]: I0203 07:20:03.068335 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-vh44m"] Feb 03 07:20:03 crc kubenswrapper[4872]: I0203 07:20:03.079315 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gmvpd/crc-debug-vh44m"] Feb 03 07:20:04 crc kubenswrapper[4872]: I0203 07:20:04.147668 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:04 crc kubenswrapper[4872]: I0203 07:20:04.195863 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59736de2-b29a-4a59-b742-ab8eeb731a70-host\") pod \"59736de2-b29a-4a59-b742-ab8eeb731a70\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " Feb 03 07:20:04 crc kubenswrapper[4872]: I0203 07:20:04.196013 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59736de2-b29a-4a59-b742-ab8eeb731a70-host" (OuterVolumeSpecName: "host") pod "59736de2-b29a-4a59-b742-ab8eeb731a70" (UID: "59736de2-b29a-4a59-b742-ab8eeb731a70"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 07:20:04 crc kubenswrapper[4872]: I0203 07:20:04.196252 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nd67\" (UniqueName: \"kubernetes.io/projected/59736de2-b29a-4a59-b742-ab8eeb731a70-kube-api-access-7nd67\") pod \"59736de2-b29a-4a59-b742-ab8eeb731a70\" (UID: \"59736de2-b29a-4a59-b742-ab8eeb731a70\") " Feb 03 07:20:04 crc kubenswrapper[4872]: I0203 07:20:04.196847 4872 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59736de2-b29a-4a59-b742-ab8eeb731a70-host\") on node \"crc\" DevicePath \"\"" Feb 03 07:20:04 crc kubenswrapper[4872]: I0203 07:20:04.203546 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59736de2-b29a-4a59-b742-ab8eeb731a70-kube-api-access-7nd67" (OuterVolumeSpecName: "kube-api-access-7nd67") pod "59736de2-b29a-4a59-b742-ab8eeb731a70" (UID: "59736de2-b29a-4a59-b742-ab8eeb731a70"). InnerVolumeSpecName "kube-api-access-7nd67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:20:04 crc kubenswrapper[4872]: I0203 07:20:04.298387 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nd67\" (UniqueName: \"kubernetes.io/projected/59736de2-b29a-4a59-b742-ab8eeb731a70-kube-api-access-7nd67\") on node \"crc\" DevicePath \"\"" Feb 03 07:20:05 crc kubenswrapper[4872]: I0203 07:20:05.051396 4872 scope.go:117] "RemoveContainer" containerID="6aa8802d03698b9a767bdc9b2dda5b4b8181c96da8752b2e5e5e5c9117c6947f" Feb 03 07:20:05 crc kubenswrapper[4872]: I0203 07:20:05.051419 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/crc-debug-vh44m" Feb 03 07:20:06 crc kubenswrapper[4872]: I0203 07:20:06.131953 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59736de2-b29a-4a59-b742-ab8eeb731a70" path="/var/lib/kubelet/pods/59736de2-b29a-4a59-b742-ab8eeb731a70/volumes" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.667426 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nrq4c"] Feb 03 07:20:29 crc kubenswrapper[4872]: E0203 07:20:29.668419 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59736de2-b29a-4a59-b742-ab8eeb731a70" containerName="container-00" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.668436 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="59736de2-b29a-4a59-b742-ab8eeb731a70" containerName="container-00" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.668710 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="59736de2-b29a-4a59-b742-ab8eeb731a70" containerName="container-00" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.670490 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.700877 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrq4c"] Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.740654 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-catalog-content\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.740825 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-utilities\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.740868 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6fxr\" (UniqueName: \"kubernetes.io/projected/9c080248-db47-4b28-b511-d704442a8587-kube-api-access-l6fxr\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.842599 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6fxr\" (UniqueName: \"kubernetes.io/projected/9c080248-db47-4b28-b511-d704442a8587-kube-api-access-l6fxr\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.842659 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-catalog-content\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.842796 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-utilities\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.843197 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-utilities\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.843704 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-catalog-content\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.870704 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l6fxr\" (UniqueName: \"kubernetes.io/projected/9c080248-db47-4b28-b511-d704442a8587-kube-api-access-l6fxr\") pod \"redhat-operators-nrq4c\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:29 crc kubenswrapper[4872]: I0203 07:20:29.996139 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:30 crc kubenswrapper[4872]: I0203 07:20:30.540940 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nrq4c"] Feb 03 07:20:31 crc kubenswrapper[4872]: I0203 07:20:31.306100 4872 generic.go:334] "Generic (PLEG): container finished" podID="9c080248-db47-4b28-b511-d704442a8587" containerID="e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656" exitCode=0 Feb 03 07:20:31 crc kubenswrapper[4872]: I0203 07:20:31.306162 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrq4c" event={"ID":"9c080248-db47-4b28-b511-d704442a8587","Type":"ContainerDied","Data":"e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656"} Feb 03 07:20:31 crc kubenswrapper[4872]: I0203 07:20:31.307330 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrq4c" event={"ID":"9c080248-db47-4b28-b511-d704442a8587","Type":"ContainerStarted","Data":"a2e7a5e0105b378324475cf3f91db378f3bd3c8c2fd3adee8d206daa105fd313"} Feb 03 07:20:32 crc kubenswrapper[4872]: I0203 07:20:32.317106 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrq4c" event={"ID":"9c080248-db47-4b28-b511-d704442a8587","Type":"ContainerStarted","Data":"b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29"} Feb 03 07:20:38 crc kubenswrapper[4872]: I0203 07:20:38.364652 4872 generic.go:334] "Generic (PLEG): container finished" podID="9c080248-db47-4b28-b511-d704442a8587" containerID="b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29" exitCode=0 Feb 03 07:20:38 crc kubenswrapper[4872]: I0203 07:20:38.364722 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrq4c" event={"ID":"9c080248-db47-4b28-b511-d704442a8587","Type":"ContainerDied","Data":"b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29"} Feb 03 07:20:39 crc kubenswrapper[4872]: I0203 07:20:39.377834 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrq4c" event={"ID":"9c080248-db47-4b28-b511-d704442a8587","Type":"ContainerStarted","Data":"7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807"} Feb 03 07:20:39 crc kubenswrapper[4872]: I0203 07:20:39.399350 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nrq4c" podStartSLOduration=2.895805733 podStartE2EDuration="10.399329454s" podCreationTimestamp="2026-02-03 07:20:29 +0000 UTC" firstStartedPulling="2026-02-03 07:20:31.308151745 +0000 UTC m=+4801.890843159" lastFinishedPulling="2026-02-03 07:20:38.811675476 +0000 UTC m=+4809.394366880" observedRunningTime="2026-02-03 07:20:39.395125712 +0000 UTC m=+4809.977817156" watchObservedRunningTime="2026-02-03 07:20:39.399329454 +0000 UTC m=+4809.982020878" Feb 03 07:20:39 crc kubenswrapper[4872]: I0203 07:20:39.996421 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 
07:20:39 crc kubenswrapper[4872]: I0203 07:20:39.996671 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:41 crc kubenswrapper[4872]: I0203 07:20:41.056668 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nrq4c" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="registry-server" probeResult="failure" output=< Feb 03 07:20:41 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s Feb 03 07:20:41 crc kubenswrapper[4872]: > Feb 03 07:20:50 crc kubenswrapper[4872]: I0203 07:20:50.071111 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:50 crc kubenswrapper[4872]: I0203 07:20:50.146320 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:50 crc kubenswrapper[4872]: I0203 07:20:50.338618 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nrq4c"] Feb 03 07:20:51 crc kubenswrapper[4872]: I0203 07:20:51.506939 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nrq4c" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="registry-server" containerID="cri-o://7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807" gracePeriod=2 Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.208310 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8447df9874-g22nb_f3bfff84-36be-489c-85a2-6e4ebfec4d1a/barbican-api/0.log" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.385449 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.492529 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8447df9874-g22nb_f3bfff84-36be-489c-85a2-6e4ebfec4d1a/barbican-api-log/0.log" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.552986 4872 generic.go:334] "Generic (PLEG): container finished" podID="9c080248-db47-4b28-b511-d704442a8587" containerID="7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807" exitCode=0 Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.553039 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrq4c" event={"ID":"9c080248-db47-4b28-b511-d704442a8587","Type":"ContainerDied","Data":"7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807"} Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.553066 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nrq4c" event={"ID":"9c080248-db47-4b28-b511-d704442a8587","Type":"ContainerDied","Data":"a2e7a5e0105b378324475cf3f91db378f3bd3c8c2fd3adee8d206daa105fd313"} Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.553071 4872 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nrq4c" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.553086 4872 scope.go:117] "RemoveContainer" containerID="7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.566075 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-catalog-content\") pod \"9c080248-db47-4b28-b511-d704442a8587\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.566470 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6fxr\" (UniqueName: \"kubernetes.io/projected/9c080248-db47-4b28-b511-d704442a8587-kube-api-access-l6fxr\") pod \"9c080248-db47-4b28-b511-d704442a8587\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.566572 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-utilities\") pod \"9c080248-db47-4b28-b511-d704442a8587\" (UID: \"9c080248-db47-4b28-b511-d704442a8587\") " Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.570280 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-utilities" (OuterVolumeSpecName: "utilities") pod "9c080248-db47-4b28-b511-d704442a8587" (UID: "9c080248-db47-4b28-b511-d704442a8587"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.573309 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.573436 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c080248-db47-4b28-b511-d704442a8587-kube-api-access-l6fxr" (OuterVolumeSpecName: "kube-api-access-l6fxr") pod "9c080248-db47-4b28-b511-d704442a8587" (UID: "9c080248-db47-4b28-b511-d704442a8587"). InnerVolumeSpecName "kube-api-access-l6fxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.575909 4872 scope.go:117] "RemoveContainer" containerID="b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.593041 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dd8d7b8db-mtx4r_db9d16c1-a901-456f-a48d-a56879b49c8d/barbican-keystone-listener/0.log" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.651224 4872 scope.go:117] "RemoveContainer" containerID="e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.675614 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6fxr\" (UniqueName: \"kubernetes.io/projected/9c080248-db47-4b28-b511-d704442a8587-kube-api-access-l6fxr\") on node \"crc\" DevicePath \"\"" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.696076 4872 scope.go:117] "RemoveContainer" containerID="7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807" Feb 03 07:20:52 crc kubenswrapper[4872]: E0203 07:20:52.698575 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807\": container with ID starting with 7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807 not found: ID does not exist" containerID="7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.698611 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807"} err="failed to get container status \"7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807\": rpc error: code = NotFound desc = could not find container \"7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807\": container with ID starting with 7e2109e77940ad85df7e2ffb271a0c58cfe93149a78f9bedc790df809e313807 not found: ID does not exist" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.698633 4872 scope.go:117] "RemoveContainer" containerID="b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29" Feb 03 07:20:52 crc kubenswrapper[4872]: E0203 07:20:52.698930 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29\": container with ID starting with b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29 not found: ID does not exist" containerID="b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.698953 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29"} err="failed to get container status \"b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29\": rpc error: code = NotFound desc = could not find container \"b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29\": container with ID starting with b8b913b6e8af57e955dc291d4a6dd58f21e12fb9911367e6f89e93041d3bac29 not found: ID does not exist" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.698966 4872 scope.go:117] "RemoveContainer" 
containerID="e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656" Feb 03 07:20:52 crc kubenswrapper[4872]: E0203 07:20:52.699193 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656\": container with ID starting with e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656 not found: ID does not exist" containerID="e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.699216 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656"} err="failed to get container status \"e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656\": rpc error: code = NotFound desc = could not find container \"e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656\": container with ID starting with e96e02ca2e04d7a6065751efdf8981768732d036e017beb777a24fb1626b6656 not found: ID does not exist" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.721153 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-dd8d7b8db-mtx4r_db9d16c1-a901-456f-a48d-a56879b49c8d/barbican-keystone-listener-log/0.log" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.740709 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c080248-db47-4b28-b511-d704442a8587" (UID: "9c080248-db47-4b28-b511-d704442a8587"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.777669 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c080248-db47-4b28-b511-d704442a8587-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.800711 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c77557787-rb2tb_8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5/barbican-worker/0.log" Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.885169 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nrq4c"] Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.889905 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nrq4c"] Feb 03 07:20:52 crc kubenswrapper[4872]: I0203 07:20:52.920098 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c77557787-rb2tb_8732beaa-57df-4ebd-8a0e-9e4ee1ef96a5/barbican-worker-log/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.077403 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fh8d8_711888ee-ec08-437f-bf74-54ea092796bf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.190895 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/ceilometer-central-agent/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.268269 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/ceilometer-notification-agent/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.365077 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/sg-core/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.387096 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_248c9cda-018d-4cea-8dc8-c6a77788155a/proxy-httpd/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.522065 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e1031a8c-c3fb-4022-826e-77509f2a2b2f/cinder-api/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.677627 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e1031a8c-c3fb-4022-826e-77509f2a2b2f/cinder-api-log/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.807323 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1e82d1b0-0354-4126-b305-6af3e5fdcb9a/cinder-scheduler/0.log" Feb 03 07:20:53 crc kubenswrapper[4872]: I0203 07:20:53.878638 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1e82d1b0-0354-4126-b305-6af3e5fdcb9a/probe/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.002994 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8927k_4ff0db4e-d73d-4a67-9b5c-9e4dfa6913a2/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.147586 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9c080248-db47-4b28-b511-d704442a8587" path="/var/lib/kubelet/pods/9c080248-db47-4b28-b511-d704442a8587/volumes" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.218175 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2l7ld_34dc0856-28c2-4b86-adb9-0310701b5110/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.273865 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-746ln_c73cade5-ebf2-4b32-9eec-efbc6a089cee/init/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.463658 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-746ln_c73cade5-ebf2-4b32-9eec-efbc6a089cee/init/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.653209 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-r8pv7_72bf0048-7229-4354-a6e3-1c508f3bacef/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.803584 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_71811df4-e41d-4e6b-a94c-81e871e39632/glance-httpd/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.835912 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-746ln_c73cade5-ebf2-4b32-9eec-efbc6a089cee/dnsmasq-dns/0.log" Feb 03 07:20:54 crc kubenswrapper[4872]: I0203 07:20:54.886294 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_71811df4-e41d-4e6b-a94c-81e871e39632/glance-log/0.log" Feb 03 07:20:55 crc kubenswrapper[4872]: I0203 07:20:55.055671 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_beba659d-d168-47b7-a0ee-f467101ed286/glance-httpd/0.log" Feb 03 07:20:55 crc kubenswrapper[4872]: I0203 07:20:55.085632 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_beba659d-d168-47b7-a0ee-f467101ed286/glance-log/0.log" Feb 03 07:20:55 crc kubenswrapper[4872]: I0203 07:20:55.318112 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57dc94599b-bvf7j_f475ab66-31e6-46da-ad2e-8e8279e33b68/horizon/1.log" Feb 03 07:20:55 crc kubenswrapper[4872]: I0203 07:20:55.487348 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57dc94599b-bvf7j_f475ab66-31e6-46da-ad2e-8e8279e33b68/horizon/0.log" Feb 03 07:20:55 crc kubenswrapper[4872]: I0203 07:20:55.624286 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4dfwh_ab6004c7-7d34-42ff-bf95-1358f1abcbf1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:55 crc kubenswrapper[4872]: I0203 07:20:55.853002 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57dc94599b-bvf7j_f475ab66-31e6-46da-ad2e-8e8279e33b68/horizon-log/0.log" Feb 03 07:20:55 crc kubenswrapper[4872]: I0203 07:20:55.907444 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qckpv_c09861b6-6f3c-496c-a46e-eb7667965fc7/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:56 crc kubenswrapper[4872]: I0203 07:20:56.187121 4872 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_keystone-cron-29501701-6l75v_1214a97d-f94a-40bf-88ea-0310ff11684d/keystone-cron/0.log" Feb 03 07:20:56 crc kubenswrapper[4872]: I0203 07:20:56.425882 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d26e8416-4ddb-40f1-bfa1-482da12274a3/kube-state-metrics/0.log" Feb 03 07:20:56 crc kubenswrapper[4872]: I0203 07:20:56.570641 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6447fd6947-kkqrr_885d40a9-ca6e-4beb-9782-35099d10bf35/keystone-api/0.log" Feb 03 07:20:56 crc kubenswrapper[4872]: I0203 07:20:56.659591 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-rlqvs_3d7243a2-dd2b-4462-8313-92e68450f743/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:57 crc kubenswrapper[4872]: I0203 07:20:57.359531 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ecd2a199-4a3b-4e36-8430-5301d68c1595/memcached/0.log" Feb 03 07:20:57 crc kubenswrapper[4872]: I0203 07:20:57.366196 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bnc6l_c7dd671e-752b-42ed-826a-be9e1bbb8d66/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:57 crc kubenswrapper[4872]: I0203 07:20:57.382986 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84b5664f65-lkwpl_23d834e1-1b59-4463-b893-fd23fa1e7ecd/neutron-httpd/0.log" Feb 03 07:20:57 crc kubenswrapper[4872]: I0203 07:20:57.511423 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84b5664f65-lkwpl_23d834e1-1b59-4463-b893-fd23fa1e7ecd/neutron-api/0.log" Feb 03 07:20:58 crc kubenswrapper[4872]: I0203 07:20:58.513347 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_898b1712-c38c-4438-9fd6-bc94e59b459e/nova-cell0-conductor-conductor/0.log" Feb 03 07:20:58 crc kubenswrapper[4872]: I0203 07:20:58.698096 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d239893e-43bd-4f8f-b03e-6451a16a0865/nova-cell1-conductor-conductor/0.log" Feb 03 07:20:59 crc kubenswrapper[4872]: I0203 07:20:59.012773 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_756dc212-f1ae-44f8-bcb1-d5c4180da686/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 07:20:59 crc kubenswrapper[4872]: I0203 07:20:59.036020 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baca7029-2f99-49c6-810f-7a25a2a853d0/nova-api-log/0.log" Feb 03 07:20:59 crc kubenswrapper[4872]: I0203 07:20:59.138743 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-m5jcw_74a3f126-057b-4f44-9483-82e6a6a00c90/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:20:59 crc kubenswrapper[4872]: I0203 07:20:59.281213 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baca7029-2f99-49c6-810f-7a25a2a853d0/nova-api-api/0.log" Feb 03 07:20:59 crc kubenswrapper[4872]: I0203 07:20:59.326882 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_96d27a75-4427-4ff9-82ad-4672a9d403da/nova-metadata-log/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.034865 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_5a46bed9-4154-4a62-8805-fe67c55a2d89/mysql-bootstrap/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.243009 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a46bed9-4154-4a62-8805-fe67c55a2d89/mysql-bootstrap/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.312194 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a46bed9-4154-4a62-8805-fe67c55a2d89/galera/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.368698 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e5a4b0fb-cc06-47b7-b789-9d321718a06c/nova-scheduler-scheduler/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.527317 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e939fc-8c23-4843-a7ec-4cbd82d8cff7/mysql-bootstrap/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.649336 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_96d27a75-4427-4ff9-82ad-4672a9d403da/nova-metadata-metadata/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.690643 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e939fc-8c23-4843-a7ec-4cbd82d8cff7/mysql-bootstrap/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.794025 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_57e939fc-8c23-4843-a7ec-4cbd82d8cff7/galera/0.log" Feb 03 07:21:00 crc kubenswrapper[4872]: I0203 07:21:00.833721 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7cf9b6dc-b4a3-4c6b-b0a2-c78dbb24aa67/openstackclient/0.log" Feb 03 07:21:01 crc kubenswrapper[4872]: I0203 07:21:01.460634 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xsclk_e868189d-0fbd-45bc-83cb-9b71f951c53f/openstack-network-exporter/0.log" Feb 03 07:21:01 crc kubenswrapper[4872]: I0203 07:21:01.556521 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovsdb-server-init/0.log" Feb 03 07:21:01 crc kubenswrapper[4872]: I0203 07:21:01.784860 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovsdb-server-init/0.log" Feb 03 07:21:01 crc kubenswrapper[4872]: I0203 07:21:01.823341 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovs-vswitchd/0.log" Feb 03 07:21:01 crc kubenswrapper[4872]: I0203 07:21:01.834595 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wjbc7_19908dab-b232-4cd8-b45b-079cebdee593/ovn-controller/0.log" Feb 03 07:21:01 crc kubenswrapper[4872]: I0203 07:21:01.840883 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zbxs4_bfcd6876-7bc4-40d4-94af-6a5c175e7bb0/ovsdb-server/0.log" Feb 03 07:21:02 crc kubenswrapper[4872]: I0203 07:21:02.111011 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9vpmm_604ba5bf-8ae1-4540-9ec8-366de98da8ba/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:21:02 crc kubenswrapper[4872]: I0203 07:21:02.140025 4872 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_e1641148-8016-42db-879c-29e9e04666f3/openstack-network-exporter/0.log" Feb 03 07:21:02 crc kubenswrapper[4872]: I0203 07:21:02.166981 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e1641148-8016-42db-879c-29e9e04666f3/ovn-northd/0.log" Feb 03 07:21:02 crc kubenswrapper[4872]: I0203 07:21:02.294963 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_82f89f1b-12ce-4720-9af7-3d8acb128b65/openstack-network-exporter/0.log" Feb 03 07:21:02 crc kubenswrapper[4872]: I0203 07:21:02.379767 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_82f89f1b-12ce-4720-9af7-3d8acb128b65/ovsdbserver-nb/0.log" Feb 03 07:21:02 crc kubenswrapper[4872]: I0203 07:21:02.416457 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f323a5b2-6517-4f06-baec-308207807af3/openstack-network-exporter/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.111352 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f7ccbfc56-8bmzq_90a09160-e06d-497b-bf29-781a4009c899/placement-api/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.124050 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f323a5b2-6517-4f06-baec-308207807af3/ovsdbserver-sb/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.342757 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f7ccbfc56-8bmzq_90a09160-e06d-497b-bf29-781a4009c899/placement-log/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.343942 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0d044870-6de9-4816-b9e2-249371dc40e6/setup-container/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.566299 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0d044870-6de9-4816-b9e2-249371dc40e6/rabbitmq/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.622604 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0d044870-6de9-4816-b9e2-249371dc40e6/setup-container/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.673789 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ee583ece-3623-4df4-b879-4ab45489bb07/setup-container/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.821845 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ee583ece-3623-4df4-b879-4ab45489bb07/rabbitmq/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.864141 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ee583ece-3623-4df4-b879-4ab45489bb07/setup-container/0.log" Feb 03 07:21:03 crc kubenswrapper[4872]: I0203 07:21:03.935170 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5hx65_e99e0ea4-b0dd-482a-986f-80eed7253030/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.084744 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t8f24_57a842b6-0ca1-471d-aae3-cb4fa4545417/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.231304 
4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bxg56_861d7992-b778-4bc8-9708-5f94d519db54/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.326929 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-nj6c7_b78a618f-d5ee-4722-8d29-b142f05127bf/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.429295 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fvbc6_60b9b2c5-7f79-49d1-b215-4de0664b44c0/ssh-known-hosts-edpm-deployment/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.629752 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f5458fb75-k8gpr_7869dbc8-d72a-47cf-8547-40b91024653f/proxy-server/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.646700 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f5458fb75-k8gpr_7869dbc8-d72a-47cf-8547-40b91024653f/proxy-httpd/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.757766 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-m4mq8_27de9be5-8c0c-4283-81ab-6ec3706d94c7/swift-ring-rebalance/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.850430 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-auditor/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.885352 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-reaper/0.log" Feb 03 07:21:04 crc kubenswrapper[4872]: I0203 07:21:04.992480 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-server/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.011422 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/account-replicator/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.055253 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-auditor/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.148025 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-server/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.193463 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-replicator/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.247160 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-auditor/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.298374 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/container-updater/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.308898 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-expirer/0.log" 
Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.443981 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-replicator/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.477295 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-server/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.522204 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/object-updater/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.565723 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/swift-recon-cron/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.598373 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_53916dd7-8beb-48bb-8689-5693b2b3cf6f/rsync/0.log" Feb 03 07:21:05 crc kubenswrapper[4872]: I0203 07:21:05.809488 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8fhgl_a56903a8-61f2-433b-9ab7-2f96b9f8d15f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:21:06 crc kubenswrapper[4872]: I0203 07:21:06.040815 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ab488c2c-7a02-4e73-8aaa-5e0197d51631/tempest-tests-tempest-tests-runner/0.log" Feb 03 07:21:06 crc kubenswrapper[4872]: I0203 07:21:06.323794 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_105cad3e-c6c1-4dfa-93dd-9138d760b916/test-operator-logs-container/0.log" Feb 03 07:21:06 crc kubenswrapper[4872]: I0203 07:21:06.396209 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-fjglw_13b6c575-0d6a-4cf8-867d-3230bdded4e4/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 07:21:31 crc kubenswrapper[4872]: I0203 07:21:31.272063 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:21:31 crc kubenswrapper[4872]: I0203 07:21:31.272610 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:21:35 crc kubenswrapper[4872]: I0203 07:21:35.291736 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/util/0.log" Feb 03 07:21:35 crc kubenswrapper[4872]: I0203 07:21:35.488959 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/util/0.log" Feb 03 07:21:35 crc kubenswrapper[4872]: I0203 07:21:35.528589 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/pull/0.log" Feb 03 07:21:35 crc kubenswrapper[4872]: I0203 07:21:35.538304 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/pull/0.log" Feb 03 07:21:35 crc kubenswrapper[4872]: I0203 07:21:35.754717 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/pull/0.log" Feb 03 07:21:35 crc kubenswrapper[4872]: I0203 07:21:35.816800 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/extract/0.log" Feb 03 07:21:35 crc kubenswrapper[4872]: I0203 07:21:35.831418 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_198fefeaf9ba73d70a516f9552fef45f45d71dbe2e17427e2d24935a282bjrj_870be28a-b443-4523-b4a0-c0d773eedaff/util/0.log" Feb 03 07:21:36 crc kubenswrapper[4872]: I0203 07:21:36.022762 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-zznkj_876b6e4d-32cd-47e3-b748-f9c8ea1d84cf/manager/0.log" Feb 03 07:21:36 crc kubenswrapper[4872]: I0203 07:21:36.079286 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-bw7h4_febe7a4c-e275-4af0-b895-8701c164271c/manager/0.log" Feb 03 07:21:36 crc kubenswrapper[4872]: I0203 07:21:36.248113 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-5chl5_7102e0e7-3daa-4610-b931-ca17c7f08461/manager/0.log" Feb 03 07:21:36 crc kubenswrapper[4872]: I0203 07:21:36.380935 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-x2779_29e7b8a5-19cf-46ea-a135-019d30af35b3/manager/0.log" Feb 03 07:21:36 crc kubenswrapper[4872]: I0203 07:21:36.468714 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-dpd6g_394038df-4d8a-41cc-bb90-02dec7dd1fb3/manager/0.log" Feb 03 07:21:37 crc kubenswrapper[4872]: I0203 07:21:37.365852 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-n2mcv_8fc2acde-dcbe-4d32-ad0e-cd4627c2152b/manager/0.log" Feb 03 07:21:37 crc kubenswrapper[4872]: I0203 07:21:37.597498 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-9qph7_cd3e162d-6733-47c4-b507-c08c577723d0/manager/0.log" Feb 03 07:21:37 crc kubenswrapper[4872]: I0203 07:21:37.613712 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-lnvft_7319691f-007c-45cd-bd1b-11055339e2ab/manager/0.log" Feb 03 07:21:38 crc kubenswrapper[4872]: I0203 07:21:38.010360 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-9x5w8_71308b40-7203-4586-9a21-9b4621a9aaf7/manager/0.log" Feb 03 07:21:38 crc kubenswrapper[4872]: I0203 07:21:38.027413 4872 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-dvqpz_c3aba523-0e11-4e5d-9adf-be5978a1f4e1/manager/0.log" Feb 03 07:21:38 crc kubenswrapper[4872]: I0203 07:21:38.349103 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-ww2cx_cfe508f3-98be-48d5-bf5b-3cb24a9ba131/manager/0.log" Feb 03 07:21:38 crc kubenswrapper[4872]: I0203 07:21:38.360484 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-czpn5_71efcd75-c242-4036-b2e0-fdb117880dd9/manager/0.log" Feb 03 07:21:39 crc kubenswrapper[4872]: I0203 07:21:39.076303 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-rjz4z_026cffca-2976-4ba1-8bb6-3e86c4521166/manager/0.log" Feb 03 07:21:39 crc kubenswrapper[4872]: I0203 07:21:39.118300 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-p475q_e7d3c449-bd59-48b2-9047-2c7589cdf51a/manager/0.log" Feb 03 07:21:39 crc kubenswrapper[4872]: I0203 07:21:39.324220 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dlmgm4_08ed93ce-02ea-45af-b481-69ed92f5aff5/manager/0.log" Feb 03 07:21:39 crc kubenswrapper[4872]: I0203 07:21:39.465859 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-67c68487b9-mxm2b_69023f68-79e2-4de9-a210-32ecce7b635b/operator/0.log" Feb 03 07:21:39 crc kubenswrapper[4872]: I0203 07:21:39.663525 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5nlb5_89cb3a32-685e-40fa-9370-374e91db24dd/registry-server/0.log" Feb 03 07:21:39 crc kubenswrapper[4872]: I0203 07:21:39.851846 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-lkxq4_584404c0-4ffd-43f5-a06f-009650dc0cc9/manager/0.log" Feb 03 07:21:40 crc kubenswrapper[4872]: I0203 07:21:40.000755 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-vl4tx_9bb5cb68-4c55-4c47-beb8-a9caa56db1b3/manager/0.log" Feb 03 07:21:40 crc kubenswrapper[4872]: I0203 07:21:40.129255 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mvdpp_71ed58d0-78f2-497b-8802-3647a361c99b/operator/0.log" Feb 03 07:21:40 crc kubenswrapper[4872]: I0203 07:21:40.347929 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-w7ncd_d9a67e95-335b-40cf-af71-4f3fd69a1fd9/manager/0.log" Feb 03 07:21:40 crc kubenswrapper[4872]: I0203 07:21:40.481265 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-29rp5_d843f756-0ec0-4a79-b34d-14e257e22102/manager/0.log" Feb 03 07:21:40 crc kubenswrapper[4872]: I0203 07:21:40.596104 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5bc755b6c5-ptv7t_9ee72576-2dc3-4b0b-ba3d-38aa27fba615/manager/0.log" Feb 03 07:21:40 crc kubenswrapper[4872]: I0203 07:21:40.664070 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-rlwfv_e0b27752-d9b8-4bd6-92c4-253508657db5/manager/0.log" Feb 03 07:21:40 crc kubenswrapper[4872]: I0203 07:21:40.727704 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-t4l5f_39f4fe96-ca54-4135-9eb3-e40a187e54a4/manager/0.log" Feb 03 07:22:01 crc kubenswrapper[4872]: I0203 07:22:01.271801 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:22:01 crc kubenswrapper[4872]: I0203 07:22:01.272310 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:22:06 crc kubenswrapper[4872]: I0203 07:22:06.676069 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qcvzs_a611c711-1f25-4e6e-983c-17c001aaeabd/kube-rbac-proxy/0.log" Feb 03 07:22:06 crc kubenswrapper[4872]: I0203 07:22:06.684762 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zvqv2_3c1bfc9a-db7c-49a5-acd6-05ad2a616cae/control-plane-machine-set-operator/0.log" Feb 03 07:22:06 crc kubenswrapper[4872]: I0203 07:22:06.846299 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qcvzs_a611c711-1f25-4e6e-983c-17c001aaeabd/machine-api-operator/0.log" Feb 03 07:22:21 crc kubenswrapper[4872]: I0203 07:22:21.323382 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vnbj8_d2894283-ae8b-4bb5-a0d0-825d14b8a2bc/cert-manager-controller/0.log" Feb 03 07:22:21 crc kubenswrapper[4872]: I0203 07:22:21.527248 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-6kgzc_93180a09-56e7-468c-9181-e88473627564/cert-manager-cainjector/0.log" Feb 03 07:22:21 crc kubenswrapper[4872]: I0203 07:22:21.673309 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-x86ch_958bf8fc-d47f-45ff-b237-64ea37f16e2d/cert-manager-webhook/0.log" Feb 03 07:22:31 crc kubenswrapper[4872]: I0203 07:22:31.271644 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 07:22:31 crc kubenswrapper[4872]: I0203 07:22:31.272044 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 07:22:31 crc kubenswrapper[4872]: I0203 07:22:31.272083 4872 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" Feb 03 07:22:31 crc kubenswrapper[4872]: I0203 07:22:31.272772 4872 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16"} pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 07:22:31 crc kubenswrapper[4872]: I0203 07:22:31.272819 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" containerID="cri-o://00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" gracePeriod=600 Feb 03 07:22:31 crc kubenswrapper[4872]: E0203 07:22:31.898000 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:22:32 crc kubenswrapper[4872]: I0203 07:22:32.212550 4872 generic.go:334] "Generic (PLEG): container finished" podID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" exitCode=0 Feb 03 07:22:32 crc kubenswrapper[4872]: I0203 07:22:32.212649 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerDied","Data":"00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16"} Feb 03 07:22:32 crc kubenswrapper[4872]: I0203 07:22:32.213118 4872 scope.go:117] "RemoveContainer" containerID="dfce94cdfd61e5881eaf79368529a71f1fe987514ba0a8af83843c863c3af6c0" Feb 03 07:22:32 crc kubenswrapper[4872]: I0203 07:22:32.214503 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:22:32 crc kubenswrapper[4872]: E0203 07:22:32.215303 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:22:37 crc kubenswrapper[4872]: I0203 07:22:37.591327 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-pnq7p_724ba92c-602b-4031-8ad1-7e5b084c4386/nmstate-console-plugin/0.log" Feb 03 07:22:37 crc kubenswrapper[4872]: I0203 07:22:37.824991 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4rnzm_d985e2b5-be7b-4e11-835f-0fbb14859743/nmstate-handler/0.log" Feb 03 07:22:37 crc kubenswrapper[4872]: I0203 07:22:37.847973 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-22pzf_eecf7d0c-c77c-4bb5-9588-24b324a7848f/kube-rbac-proxy/0.log" Feb 03 
07:22:37 crc kubenswrapper[4872]: I0203 07:22:37.963215 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-22pzf_eecf7d0c-c77c-4bb5-9588-24b324a7848f/nmstate-metrics/0.log" Feb 03 07:22:38 crc kubenswrapper[4872]: I0203 07:22:38.174406 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-jnk4k_0a2cf0fd-a05d-4b50-a0fa-727e373679c2/nmstate-operator/0.log" Feb 03 07:22:38 crc kubenswrapper[4872]: I0203 07:22:38.249648 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-l6d5d_79f465a2-4fb7-470a-83ba-ed5d98e5227b/nmstate-webhook/0.log" Feb 03 07:22:44 crc kubenswrapper[4872]: I0203 07:22:44.122472 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:22:44 crc kubenswrapper[4872]: E0203 07:22:44.123282 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:22:58 crc kubenswrapper[4872]: I0203 07:22:58.125313 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:22:58 crc kubenswrapper[4872]: E0203 07:22:58.126662 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:23:09 crc kubenswrapper[4872]: I0203 07:23:09.123790 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:23:09 crc kubenswrapper[4872]: E0203 07:23:09.124356 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:23:10 crc kubenswrapper[4872]: I0203 07:23:10.927177 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tb2wg_fb6f2971-eff5-4e61-8584-073de69e2e5f/kube-rbac-proxy/0.log" Feb 03 07:23:10 crc kubenswrapper[4872]: I0203 07:23:10.960839 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-tb2wg_fb6f2971-eff5-4e61-8584-073de69e2e5f/controller/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.249882 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.412207 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.425544 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.499139 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.533669 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.693397 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.770280 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.775059 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:23:11 crc kubenswrapper[4872]: I0203 07:23:11.833248 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.001610 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-reloader/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.047959 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-frr-files/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.070954 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/cp-metrics/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.086268 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/controller/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.320088 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/frr-metrics/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.354121 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/kube-rbac-proxy-frr/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.396731 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/kube-rbac-proxy/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.541119 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/reloader/0.log" Feb 03 07:23:12 crc kubenswrapper[4872]: I0203 07:23:12.661262 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-grdt7_69417317-0a8d-4c10-8f4c-fe8e387b678e/frr-k8s-webhook-server/0.log" Feb 03 
07:23:13 crc kubenswrapper[4872]: I0203 07:23:13.022633 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fdc65c4dc-rczwz_80b94f6b-5ca4-4650-b58f-df22137e4c04/manager/0.log" Feb 03 07:23:13 crc kubenswrapper[4872]: I0203 07:23:13.162434 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-74df9ff78b-5dz89_c81e20ac-e8e8-4f44-ba9d-f52c5c30849b/webhook-server/0.log" Feb 03 07:23:13 crc kubenswrapper[4872]: I0203 07:23:13.391056 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4qtc9_21971f1c-c210-4df4-942c-4637ecdbcd75/kube-rbac-proxy/0.log" Feb 03 07:23:13 crc kubenswrapper[4872]: I0203 07:23:13.529368 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qrbzc_7856ee3e-22ea-4e77-b4aa-69893fc7e281/frr/0.log" Feb 03 07:23:13 crc kubenswrapper[4872]: I0203 07:23:13.983704 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4qtc9_21971f1c-c210-4df4-942c-4637ecdbcd75/speaker/0.log" Feb 03 07:23:23 crc kubenswrapper[4872]: I0203 07:23:23.123247 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:23:23 crc kubenswrapper[4872]: E0203 07:23:23.124272 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:23:29 crc kubenswrapper[4872]: I0203 07:23:29.420499 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/util/0.log" Feb 03 07:23:29 crc kubenswrapper[4872]: I0203 07:23:29.706515 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/pull/0.log" Feb 03 07:23:29 crc kubenswrapper[4872]: I0203 07:23:29.754577 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/util/0.log" Feb 03 07:23:29 crc kubenswrapper[4872]: I0203 07:23:29.769045 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/pull/0.log" Feb 03 07:23:29 crc kubenswrapper[4872]: I0203 07:23:29.987272 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/extract/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.023147 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/pull/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.049888 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2bjll_9a5f0a26-8be9-4f0b-a612-5416c27be8d0/util/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.215470 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/util/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.382968 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/util/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.415789 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/pull/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.419078 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/pull/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.747565 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/util/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.768061 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/pull/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.796306 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tvx2s_f2a64c03-cefd-4a9e-9f86-fd50865536d3/extract/0.log" Feb 03 07:23:30 crc kubenswrapper[4872]: I0203 07:23:30.954916 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-utilities/0.log" Feb 03 07:23:31 crc kubenswrapper[4872]: I0203 07:23:31.240404 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-utilities/0.log" Feb 03 07:23:31 crc kubenswrapper[4872]: I0203 07:23:31.251210 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-content/0.log" Feb 03 07:23:31 crc kubenswrapper[4872]: I0203 07:23:31.279130 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-content/0.log" Feb 03 07:23:31 crc kubenswrapper[4872]: I0203 07:23:31.499064 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-utilities/0.log" Feb 03 07:23:31 crc kubenswrapper[4872]: I0203 07:23:31.629528 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/extract-content/0.log" Feb 03 07:23:31 crc kubenswrapper[4872]: I0203 07:23:31.814219 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-utilities/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.090932 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ttvm7_088135ef-2437-4cab-b009-302268e318d5/registry-server/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.104707 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-utilities/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.163489 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-content/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.166678 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-content/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.528793 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-utilities/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.623536 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/extract-content/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.845858 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7sj6w_c4bf20b9-bd1d-4d8b-8547-500924c14af5/marketplace-operator/0.log" Feb 03 07:23:32 crc kubenswrapper[4872]: I0203 07:23:32.978096 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-utilities/0.log" Feb 03 07:23:33 crc kubenswrapper[4872]: I0203 07:23:33.158637 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-utilities/0.log" Feb 03 07:23:33 crc kubenswrapper[4872]: I0203 07:23:33.271480 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwrls_d1202db8-36c7-47b9-b951-be67cc9c1c44/registry-server/0.log" Feb 03 07:23:33 crc kubenswrapper[4872]: I0203 07:23:33.301552 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-content/0.log" Feb 03 07:23:33 crc kubenswrapper[4872]: I0203 07:23:33.337562 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-content/0.log" Feb 03 07:23:33 crc kubenswrapper[4872]: I0203 07:23:33.768572 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-utilities/0.log" Feb 03 07:23:33 crc kubenswrapper[4872]: I0203 07:23:33.885930 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/extract-content/0.log" Feb 03 07:23:34 crc kubenswrapper[4872]: I0203 07:23:34.069225 4872 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-bmlvt_404e90f0-f0f9-41d9-ac4d-9aeb63770d50/registry-server/0.log" Feb 03 07:23:34 crc kubenswrapper[4872]: I0203 07:23:34.139819 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-utilities/0.log" Feb 03 07:23:34 crc kubenswrapper[4872]: I0203 07:23:34.665329 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-content/0.log" Feb 03 07:23:34 crc kubenswrapper[4872]: I0203 07:23:34.699109 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-utilities/0.log" Feb 03 07:23:34 crc kubenswrapper[4872]: I0203 07:23:34.704064 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-content/0.log" Feb 03 07:23:34 crc kubenswrapper[4872]: I0203 07:23:34.900908 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-utilities/0.log" Feb 03 07:23:34 crc kubenswrapper[4872]: I0203 07:23:34.944018 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/extract-content/0.log" Feb 03 07:23:35 crc kubenswrapper[4872]: I0203 07:23:35.122764 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:23:35 crc kubenswrapper[4872]: E0203 07:23:35.123214 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:23:35 crc kubenswrapper[4872]: I0203 07:23:35.516911 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-prm6b_45ac7feb-4f3a-459f-ab99-1409ff1e62bb/registry-server/0.log" Feb 03 07:23:50 crc kubenswrapper[4872]: I0203 07:23:50.130661 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:23:50 crc kubenswrapper[4872]: E0203 07:23:50.131703 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:24:04 crc kubenswrapper[4872]: I0203 07:24:04.122779 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:24:04 crc kubenswrapper[4872]: E0203 07:24:04.123710 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:24:17 crc kubenswrapper[4872]: I0203 07:24:17.123347 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:24:17 crc kubenswrapper[4872]: E0203 07:24:17.126028 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:24:31 crc kubenswrapper[4872]: I0203 07:24:31.122804 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:24:31 crc kubenswrapper[4872]: E0203 07:24:31.125111 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:24:44 crc kubenswrapper[4872]: I0203 07:24:44.123406 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:24:44 crc kubenswrapper[4872]: E0203 07:24:44.125919 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:24:56 crc kubenswrapper[4872]: I0203 07:24:56.123174 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:24:56 crc kubenswrapper[4872]: E0203 07:24:56.123893 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:25:07 crc kubenswrapper[4872]: I0203 07:25:07.125655 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:25:07 crc kubenswrapper[4872]: E0203 07:25:07.126559 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" 
podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:25:22 crc kubenswrapper[4872]: I0203 07:25:22.126838 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:25:22 crc kubenswrapper[4872]: E0203 07:25:22.127481 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:25:35 crc kubenswrapper[4872]: I0203 07:25:35.122934 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:25:35 crc kubenswrapper[4872]: E0203 07:25:35.123812 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:25:47 crc kubenswrapper[4872]: I0203 07:25:47.123328 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:25:47 crc kubenswrapper[4872]: E0203 07:25:47.123932 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:25:59 crc kubenswrapper[4872]: I0203 07:25:59.919307 4872 generic.go:334] "Generic (PLEG): container finished" podID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerID="cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f" exitCode=0 Feb 03 07:25:59 crc kubenswrapper[4872]: I0203 07:25:59.919388 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gmvpd/must-gather-w2dql" event={"ID":"2c592100-3d66-4d73-9d4e-a18520973fb9","Type":"ContainerDied","Data":"cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f"} Feb 03 07:25:59 crc kubenswrapper[4872]: I0203 07:25:59.920395 4872 scope.go:117] "RemoveContainer" containerID="cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f" Feb 03 07:26:00 crc kubenswrapper[4872]: I0203 07:26:00.114125 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gmvpd_must-gather-w2dql_2c592100-3d66-4d73-9d4e-a18520973fb9/gather/0.log" Feb 03 07:26:00 crc kubenswrapper[4872]: I0203 07:26:00.130556 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:26:00 crc kubenswrapper[4872]: E0203 07:26:00.131000 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:26:01 crc kubenswrapper[4872]: I0203 07:26:01.505185 4872 scope.go:117] "RemoveContainer" containerID="264e0494deb4e4ec433de9f5c08d19b6d9af5e0bf2f79b2f84bd80c37e74b773" Feb 03 07:26:01 crc kubenswrapper[4872]: I0203 07:26:01.531889 4872 scope.go:117] "RemoveContainer" containerID="9b9c0bc64419e1d7050441e94f899ef4d0701a1f3ce58aec70cdad3d25a26a90" Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.022653 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gmvpd/must-gather-w2dql"] Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.024426 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gmvpd/must-gather-w2dql" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerName="copy" containerID="cri-o://34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c" gracePeriod=2 Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.049252 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gmvpd/must-gather-w2dql"] Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.123912 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:26:14 crc kubenswrapper[4872]: E0203 07:26:14.124528 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.847308 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gmvpd_must-gather-w2dql_2c592100-3d66-4d73-9d4e-a18520973fb9/copy/0.log" Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.847891 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/must-gather-w2dql" Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.957850 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c592100-3d66-4d73-9d4e-a18520973fb9-must-gather-output\") pod \"2c592100-3d66-4d73-9d4e-a18520973fb9\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.958046 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n82f\" (UniqueName: \"kubernetes.io/projected/2c592100-3d66-4d73-9d4e-a18520973fb9-kube-api-access-8n82f\") pod \"2c592100-3d66-4d73-9d4e-a18520973fb9\" (UID: \"2c592100-3d66-4d73-9d4e-a18520973fb9\") " Feb 03 07:26:14 crc kubenswrapper[4872]: I0203 07:26:14.963522 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c592100-3d66-4d73-9d4e-a18520973fb9-kube-api-access-8n82f" (OuterVolumeSpecName: "kube-api-access-8n82f") pod "2c592100-3d66-4d73-9d4e-a18520973fb9" (UID: "2c592100-3d66-4d73-9d4e-a18520973fb9"). InnerVolumeSpecName "kube-api-access-8n82f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.058109 4872 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gmvpd_must-gather-w2dql_2c592100-3d66-4d73-9d4e-a18520973fb9/copy/0.log" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.058529 4872 generic.go:334] "Generic (PLEG): container finished" podID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerID="34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c" exitCode=143 Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.058578 4872 scope.go:117] "RemoveContainer" containerID="34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.058581 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gmvpd/must-gather-w2dql" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.060095 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n82f\" (UniqueName: \"kubernetes.io/projected/2c592100-3d66-4d73-9d4e-a18520973fb9-kube-api-access-8n82f\") on node \"crc\" DevicePath \"\"" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.075088 4872 scope.go:117] "RemoveContainer" containerID="cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.101374 4872 scope.go:117] "RemoveContainer" containerID="34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c" Feb 03 07:26:15 crc kubenswrapper[4872]: E0203 07:26:15.101936 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c\": container with ID starting with 34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c not found: ID does not exist" containerID="34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.101989 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c"} err="failed to get container status \"34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c\": rpc error: code = NotFound desc = could not find container \"34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c\": container with ID starting with 34c5828f6e76452f7fadb6379681777668f8acada750508cdd8eeda81438e47c not found: ID does not exist" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.102020 4872 scope.go:117] "RemoveContainer" containerID="cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f" Feb 03 07:26:15 crc kubenswrapper[4872]: E0203 07:26:15.102959 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f\": container with ID starting with cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f not found: ID does not exist" containerID="cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.103008 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f"} err="failed to get container status 
\"cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f\": rpc error: code = NotFound desc = could not find container \"cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f\": container with ID starting with cc6a1072fb3bda776ecb0700a20bceac90ebe260f73f9254ac7eb79371502b3f not found: ID does not exist" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.147602 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c592100-3d66-4d73-9d4e-a18520973fb9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2c592100-3d66-4d73-9d4e-a18520973fb9" (UID: "2c592100-3d66-4d73-9d4e-a18520973fb9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:26:15 crc kubenswrapper[4872]: I0203 07:26:15.162621 4872 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c592100-3d66-4d73-9d4e-a18520973fb9-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 07:26:16 crc kubenswrapper[4872]: I0203 07:26:16.133901 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" path="/var/lib/kubelet/pods/2c592100-3d66-4d73-9d4e-a18520973fb9/volumes" Feb 03 07:26:29 crc kubenswrapper[4872]: I0203 07:26:29.123754 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:26:29 crc kubenswrapper[4872]: E0203 07:26:29.124855 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:26:43 crc kubenswrapper[4872]: I0203 07:26:43.123365 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:26:43 crc kubenswrapper[4872]: E0203 07:26:43.124295 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:26:57 crc kubenswrapper[4872]: I0203 07:26:57.123764 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:26:57 crc kubenswrapper[4872]: E0203 07:26:57.124880 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:27:09 crc kubenswrapper[4872]: I0203 07:27:09.123863 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:27:09 crc kubenswrapper[4872]: E0203 
07:27:09.124541 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:27:20 crc kubenswrapper[4872]: I0203 07:27:20.129535 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:27:20 crc kubenswrapper[4872]: E0203 07:27:20.130555 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:27:31 crc kubenswrapper[4872]: I0203 07:27:31.123607 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:27:31 crc kubenswrapper[4872]: E0203 07:27:31.124576 4872 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hgxbn_openshift-machine-config-operator(05d05db4-da7f-4f2f-9025-672aefab2d16)\"" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" Feb 03 07:27:46 crc kubenswrapper[4872]: I0203 07:27:46.122473 4872 scope.go:117] "RemoveContainer" containerID="00eca218efd6d6287ba178ca7acf32b31c7c0ebd118ca628b3e7968ae7960f16" Feb 03 07:27:46 crc kubenswrapper[4872]: I0203 07:27:46.930876 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" event={"ID":"05d05db4-da7f-4f2f-9025-672aefab2d16","Type":"ContainerStarted","Data":"d9b7bc31bb90bb506e5cf903a700e14033a9463e154445d38ec849d1e522be5c"} Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.304472 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qkpg5"] Feb 03 07:28:52 crc kubenswrapper[4872]: E0203 07:28:52.305599 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerName="gather" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305614 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerName="gather" Feb 03 07:28:52 crc kubenswrapper[4872]: E0203 07:28:52.305627 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerName="copy" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305632 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerName="copy" Feb 03 07:28:52 crc kubenswrapper[4872]: E0203 07:28:52.305647 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="extract-content" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305653 4872 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="extract-content" Feb 03 07:28:52 crc kubenswrapper[4872]: E0203 07:28:52.305665 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="extract-utilities" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305672 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="extract-utilities" Feb 03 07:28:52 crc kubenswrapper[4872]: E0203 07:28:52.305699 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="registry-server" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305705 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="registry-server" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305875 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c080248-db47-4b28-b511-d704442a8587" containerName="registry-server" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305896 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerName="gather" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.305926 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c592100-3d66-4d73-9d4e-a18520973fb9" containerName="copy" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.307174 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.322599 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qkpg5"] Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.427180 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-utilities\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.427237 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndsk2\" (UniqueName: \"kubernetes.io/projected/87b10977-bdc4-49d5-a277-c33fcbf983cd-kube-api-access-ndsk2\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.427298 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-catalog-content\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.528969 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-catalog-content\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.529118 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-utilities\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.529145 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndsk2\" (UniqueName: \"kubernetes.io/projected/87b10977-bdc4-49d5-a277-c33fcbf983cd-kube-api-access-ndsk2\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.529611 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-catalog-content\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.529769 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-utilities\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.565402 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndsk2\" (UniqueName: \"kubernetes.io/projected/87b10977-bdc4-49d5-a277-c33fcbf983cd-kube-api-access-ndsk2\") pod \"certified-operators-qkpg5\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:52 crc kubenswrapper[4872]: I0203 07:28:52.628101 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:28:53 crc kubenswrapper[4872]: I0203 07:28:53.229772 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qkpg5"] Feb 03 07:28:53 crc kubenswrapper[4872]: I0203 07:28:53.580763 4872 generic.go:334] "Generic (PLEG): container finished" podID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerID="aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c" exitCode=0 Feb 03 07:28:53 crc kubenswrapper[4872]: I0203 07:28:53.580821 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkpg5" event={"ID":"87b10977-bdc4-49d5-a277-c33fcbf983cd","Type":"ContainerDied","Data":"aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c"} Feb 03 07:28:53 crc kubenswrapper[4872]: I0203 07:28:53.580943 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkpg5" event={"ID":"87b10977-bdc4-49d5-a277-c33fcbf983cd","Type":"ContainerStarted","Data":"0b9f41718226ecd3ec588c6235f329d927f37ba080fcda29a50f008e90ebe72f"} Feb 03 07:28:53 crc kubenswrapper[4872]: I0203 07:28:53.583970 4872 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.685988 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p7s"] Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.688589 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.704845 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p7s"] Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.771408 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brxjp\" (UniqueName: \"kubernetes.io/projected/383ab314-db5f-4068-a9c2-b0790505a287-kube-api-access-brxjp\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.771612 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-utilities\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.772031 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-catalog-content\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.873020 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-catalog-content\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.873096 4872 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brxjp\" (UniqueName: \"kubernetes.io/projected/383ab314-db5f-4068-a9c2-b0790505a287-kube-api-access-brxjp\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.873142 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-utilities\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.873637 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-utilities\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.873878 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-catalog-content\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:54 crc kubenswrapper[4872]: I0203 07:28:54.911616 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brxjp\" (UniqueName: \"kubernetes.io/projected/383ab314-db5f-4068-a9c2-b0790505a287-kube-api-access-brxjp\") pod \"redhat-marketplace-k7p7s\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:55 crc kubenswrapper[4872]: I0203 07:28:55.011628 4872 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:28:55 crc kubenswrapper[4872]: I0203 07:28:55.463231 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p7s"] Feb 03 07:28:55 crc kubenswrapper[4872]: W0203 07:28:55.471366 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod383ab314_db5f_4068_a9c2_b0790505a287.slice/crio-62b2ddfe8a9ec4657c9a70b4229e4da5228b16035915aa635bcc248070611c8d WatchSource:0}: Error finding container 62b2ddfe8a9ec4657c9a70b4229e4da5228b16035915aa635bcc248070611c8d: Status 404 returned error can't find the container with id 62b2ddfe8a9ec4657c9a70b4229e4da5228b16035915aa635bcc248070611c8d Feb 03 07:28:55 crc kubenswrapper[4872]: I0203 07:28:55.600551 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p7s" event={"ID":"383ab314-db5f-4068-a9c2-b0790505a287","Type":"ContainerStarted","Data":"62b2ddfe8a9ec4657c9a70b4229e4da5228b16035915aa635bcc248070611c8d"} Feb 03 07:28:55 crc kubenswrapper[4872]: I0203 07:28:55.605086 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkpg5" event={"ID":"87b10977-bdc4-49d5-a277-c33fcbf983cd","Type":"ContainerStarted","Data":"8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec"} Feb 03 07:28:56 crc kubenswrapper[4872]: I0203 07:28:56.617023 4872 generic.go:334] "Generic (PLEG): container finished" podID="383ab314-db5f-4068-a9c2-b0790505a287" containerID="28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037" exitCode=0 Feb 03 07:28:56 crc kubenswrapper[4872]: I0203 07:28:56.617149 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p7s" event={"ID":"383ab314-db5f-4068-a9c2-b0790505a287","Type":"ContainerDied","Data":"28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037"} Feb 03 07:28:56 crc kubenswrapper[4872]: I0203 07:28:56.622151 4872 generic.go:334] "Generic (PLEG): container finished" podID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerID="8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec" exitCode=0 Feb 03 07:28:56 crc kubenswrapper[4872]: I0203 07:28:56.622240 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkpg5" event={"ID":"87b10977-bdc4-49d5-a277-c33fcbf983cd","Type":"ContainerDied","Data":"8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec"} Feb 03 07:28:57 crc kubenswrapper[4872]: I0203 07:28:57.631269 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p7s" event={"ID":"383ab314-db5f-4068-a9c2-b0790505a287","Type":"ContainerStarted","Data":"b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c"} Feb 03 07:28:57 crc kubenswrapper[4872]: I0203 07:28:57.634763 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkpg5" event={"ID":"87b10977-bdc4-49d5-a277-c33fcbf983cd","Type":"ContainerStarted","Data":"b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00"} Feb 03 07:28:57 crc kubenswrapper[4872]: I0203 07:28:57.677157 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qkpg5" podStartSLOduration=2.180179624 podStartE2EDuration="5.677127511s" podCreationTimestamp="2026-02-03 07:28:52 +0000 UTC" 
firstStartedPulling="2026-02-03 07:28:53.583623221 +0000 UTC m=+5304.166314645" lastFinishedPulling="2026-02-03 07:28:57.080571118 +0000 UTC m=+5307.663262532" observedRunningTime="2026-02-03 07:28:57.672678443 +0000 UTC m=+5308.255369857" watchObservedRunningTime="2026-02-03 07:28:57.677127511 +0000 UTC m=+5308.259818965" Feb 03 07:28:58 crc kubenswrapper[4872]: I0203 07:28:58.644982 4872 generic.go:334] "Generic (PLEG): container finished" podID="383ab314-db5f-4068-a9c2-b0790505a287" containerID="b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c" exitCode=0 Feb 03 07:28:58 crc kubenswrapper[4872]: I0203 07:28:58.645054 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p7s" event={"ID":"383ab314-db5f-4068-a9c2-b0790505a287","Type":"ContainerDied","Data":"b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c"} Feb 03 07:28:59 crc kubenswrapper[4872]: I0203 07:28:59.660618 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p7s" event={"ID":"383ab314-db5f-4068-a9c2-b0790505a287","Type":"ContainerStarted","Data":"fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13"} Feb 03 07:28:59 crc kubenswrapper[4872]: I0203 07:28:59.693147 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7p7s" podStartSLOduration=2.955358211 podStartE2EDuration="5.69311529s" podCreationTimestamp="2026-02-03 07:28:54 +0000 UTC" firstStartedPulling="2026-02-03 07:28:56.619303168 +0000 UTC m=+5307.201994572" lastFinishedPulling="2026-02-03 07:28:59.357060197 +0000 UTC m=+5309.939751651" observedRunningTime="2026-02-03 07:28:59.678378703 +0000 UTC m=+5310.261070127" watchObservedRunningTime="2026-02-03 07:28:59.69311529 +0000 UTC m=+5310.275806734" Feb 03 07:29:02 crc kubenswrapper[4872]: I0203 07:29:02.628926 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:29:02 crc kubenswrapper[4872]: I0203 07:29:02.629224 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:29:02 crc kubenswrapper[4872]: I0203 07:29:02.678322 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:29:02 crc kubenswrapper[4872]: I0203 07:29:02.737550 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:29:03 crc kubenswrapper[4872]: I0203 07:29:03.080419 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qkpg5"] Feb 03 07:29:04 crc kubenswrapper[4872]: I0203 07:29:04.708404 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qkpg5" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="registry-server" containerID="cri-o://b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00" gracePeriod=2 Feb 03 07:29:04 crc kubenswrapper[4872]: E0203 07:29:04.912461 4872 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87b10977_bdc4_49d5_a277_c33fcbf983cd.slice/crio-b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00.scope\": RecentStats: unable to find 
data in memory cache]" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.012109 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.012158 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.065826 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.169068 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.289018 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-catalog-content\") pod \"87b10977-bdc4-49d5-a277-c33fcbf983cd\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.289422 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-utilities\") pod \"87b10977-bdc4-49d5-a277-c33fcbf983cd\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.289565 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndsk2\" (UniqueName: \"kubernetes.io/projected/87b10977-bdc4-49d5-a277-c33fcbf983cd-kube-api-access-ndsk2\") pod \"87b10977-bdc4-49d5-a277-c33fcbf983cd\" (UID: \"87b10977-bdc4-49d5-a277-c33fcbf983cd\") " Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.290977 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-utilities" (OuterVolumeSpecName: "utilities") pod "87b10977-bdc4-49d5-a277-c33fcbf983cd" (UID: "87b10977-bdc4-49d5-a277-c33fcbf983cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.298442 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b10977-bdc4-49d5-a277-c33fcbf983cd-kube-api-access-ndsk2" (OuterVolumeSpecName: "kube-api-access-ndsk2") pod "87b10977-bdc4-49d5-a277-c33fcbf983cd" (UID: "87b10977-bdc4-49d5-a277-c33fcbf983cd"). InnerVolumeSpecName "kube-api-access-ndsk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.344278 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87b10977-bdc4-49d5-a277-c33fcbf983cd" (UID: "87b10977-bdc4-49d5-a277-c33fcbf983cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.392140 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndsk2\" (UniqueName: \"kubernetes.io/projected/87b10977-bdc4-49d5-a277-c33fcbf983cd-kube-api-access-ndsk2\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.392182 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.392193 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b10977-bdc4-49d5-a277-c33fcbf983cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.725641 4872 generic.go:334] "Generic (PLEG): container finished" podID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerID="b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00" exitCode=0 Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.725761 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkpg5" event={"ID":"87b10977-bdc4-49d5-a277-c33fcbf983cd","Type":"ContainerDied","Data":"b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00"} Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.725794 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qkpg5" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.725835 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qkpg5" event={"ID":"87b10977-bdc4-49d5-a277-c33fcbf983cd","Type":"ContainerDied","Data":"0b9f41718226ecd3ec588c6235f329d927f37ba080fcda29a50f008e90ebe72f"} Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.725867 4872 scope.go:117] "RemoveContainer" containerID="b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.767943 4872 scope.go:117] "RemoveContainer" containerID="8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.778652 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qkpg5"] Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.795748 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qkpg5"] Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.798914 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.805212 4872 scope.go:117] "RemoveContainer" containerID="aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.847359 4872 scope.go:117] "RemoveContainer" containerID="b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00" Feb 03 07:29:05 crc kubenswrapper[4872]: E0203 07:29:05.847805 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00\": container with ID starting with 
b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00 not found: ID does not exist" containerID="b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.847858 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00"} err="failed to get container status \"b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00\": rpc error: code = NotFound desc = could not find container \"b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00\": container with ID starting with b968ad0668ee1987d8cdfad04029797e09a57373ca209e0990c4fedc91119a00 not found: ID does not exist" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.847884 4872 scope.go:117] "RemoveContainer" containerID="8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec" Feb 03 07:29:05 crc kubenswrapper[4872]: E0203 07:29:05.848177 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec\": container with ID starting with 8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec not found: ID does not exist" containerID="8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.848197 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec"} err="failed to get container status \"8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec\": rpc error: code = NotFound desc = could not find container \"8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec\": container with ID starting with 8d1d7cac60d22e6750768a2dababbf3ae6997ce0ff071b1782a78e526e73b9ec not found: ID does not exist" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.848208 4872 scope.go:117] "RemoveContainer" containerID="aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c" Feb 03 07:29:05 crc kubenswrapper[4872]: E0203 07:29:05.848408 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c\": container with ID starting with aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c not found: ID does not exist" containerID="aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c" Feb 03 07:29:05 crc kubenswrapper[4872]: I0203 07:29:05.848443 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c"} err="failed to get container status \"aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c\": rpc error: code = NotFound desc = could not find container \"aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c\": container with ID starting with aa1033ec68ca12cdd51d4912f10b7f2b7dea7d9112fa9deb9414cf741cc42f2c not found: ID does not exist" Feb 03 07:29:06 crc kubenswrapper[4872]: I0203 07:29:06.140123 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" path="/var/lib/kubelet/pods/87b10977-bdc4-49d5-a277-c33fcbf983cd/volumes" Feb 03 07:29:07 crc kubenswrapper[4872]: I0203 07:29:07.480173 
4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p7s"] Feb 03 07:29:07 crc kubenswrapper[4872]: I0203 07:29:07.746178 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7p7s" podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="registry-server" containerID="cri-o://fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13" gracePeriod=2 Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.279552 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.350026 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-utilities\") pod \"383ab314-db5f-4068-a9c2-b0790505a287\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.350064 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-catalog-content\") pod \"383ab314-db5f-4068-a9c2-b0790505a287\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.350112 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brxjp\" (UniqueName: \"kubernetes.io/projected/383ab314-db5f-4068-a9c2-b0790505a287-kube-api-access-brxjp\") pod \"383ab314-db5f-4068-a9c2-b0790505a287\" (UID: \"383ab314-db5f-4068-a9c2-b0790505a287\") " Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.356233 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-utilities" (OuterVolumeSpecName: "utilities") pod "383ab314-db5f-4068-a9c2-b0790505a287" (UID: "383ab314-db5f-4068-a9c2-b0790505a287"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.359421 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383ab314-db5f-4068-a9c2-b0790505a287-kube-api-access-brxjp" (OuterVolumeSpecName: "kube-api-access-brxjp") pod "383ab314-db5f-4068-a9c2-b0790505a287" (UID: "383ab314-db5f-4068-a9c2-b0790505a287"). InnerVolumeSpecName "kube-api-access-brxjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.381613 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "383ab314-db5f-4068-a9c2-b0790505a287" (UID: "383ab314-db5f-4068-a9c2-b0790505a287"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.453002 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.453034 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383ab314-db5f-4068-a9c2-b0790505a287-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.453046 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brxjp\" (UniqueName: \"kubernetes.io/projected/383ab314-db5f-4068-a9c2-b0790505a287-kube-api-access-brxjp\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.757235 4872 generic.go:334] "Generic (PLEG): container finished" podID="383ab314-db5f-4068-a9c2-b0790505a287" containerID="fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13" exitCode=0 Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.757310 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7p7s" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.757333 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p7s" event={"ID":"383ab314-db5f-4068-a9c2-b0790505a287","Type":"ContainerDied","Data":"fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13"} Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.758310 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7p7s" event={"ID":"383ab314-db5f-4068-a9c2-b0790505a287","Type":"ContainerDied","Data":"62b2ddfe8a9ec4657c9a70b4229e4da5228b16035915aa635bcc248070611c8d"} Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.758339 4872 scope.go:117] "RemoveContainer" containerID="fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.779294 4872 scope.go:117] "RemoveContainer" containerID="b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.837368 4872 scope.go:117] "RemoveContainer" containerID="28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.844146 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p7s"] Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.854177 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7p7s"] Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.865256 4872 scope.go:117] "RemoveContainer" containerID="fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13" Feb 03 07:29:08 crc kubenswrapper[4872]: E0203 07:29:08.865759 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13\": container with ID starting with fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13 not found: ID does not exist" containerID="fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.865796 4872 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13"} err="failed to get container status \"fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13\": rpc error: code = NotFound desc = could not find container \"fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13\": container with ID starting with fc25a1e6b2a9af55e59d308395c12ae850e80650462be2180b264be8371a3e13 not found: ID does not exist" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.865824 4872 scope.go:117] "RemoveContainer" containerID="b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c" Feb 03 07:29:08 crc kubenswrapper[4872]: E0203 07:29:08.866199 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c\": container with ID starting with b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c not found: ID does not exist" containerID="b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.866246 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c"} err="failed to get container status \"b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c\": rpc error: code = NotFound desc = could not find container \"b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c\": container with ID starting with b5860d2647b6c34ab134eff8b67b3af5673630c48df02b4e484eb87e536f553c not found: ID does not exist" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.866292 4872 scope.go:117] "RemoveContainer" containerID="28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037" Feb 03 07:29:08 crc kubenswrapper[4872]: E0203 07:29:08.866575 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037\": container with ID starting with 28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037 not found: ID does not exist" containerID="28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037" Feb 03 07:29:08 crc kubenswrapper[4872]: I0203 07:29:08.866607 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037"} err="failed to get container status \"28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037\": rpc error: code = NotFound desc = could not find container \"28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037\": container with ID starting with 28689608a4edb262f7f249736a47ccc354aec1f94f828121760a485a66969037 not found: ID does not exist" Feb 03 07:29:10 crc kubenswrapper[4872]: I0203 07:29:10.134290 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383ab314-db5f-4068-a9c2-b0790505a287" path="/var/lib/kubelet/pods/383ab314-db5f-4068-a9c2-b0790505a287/volumes" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.284816 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mvlsx"] Feb 03 07:29:20 crc kubenswrapper[4872]: E0203 07:29:20.285657 4872 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="extract-content" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.285672 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="extract-content" Feb 03 07:29:20 crc kubenswrapper[4872]: E0203 07:29:20.293257 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="extract-utilities" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.293287 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="extract-utilities" Feb 03 07:29:20 crc kubenswrapper[4872]: E0203 07:29:20.293318 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="extract-utilities" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.293327 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="extract-utilities" Feb 03 07:29:20 crc kubenswrapper[4872]: E0203 07:29:20.293363 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="registry-server" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.293375 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="registry-server" Feb 03 07:29:20 crc kubenswrapper[4872]: E0203 07:29:20.293391 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="extract-content" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.293399 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="extract-content" Feb 03 07:29:20 crc kubenswrapper[4872]: E0203 07:29:20.293430 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="registry-server" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.293438 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="registry-server" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.293819 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b10977-bdc4-49d5-a277-c33fcbf983cd" containerName="registry-server" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.293866 4872 memory_manager.go:354] "RemoveStaleState removing state" podUID="383ab314-db5f-4068-a9c2-b0790505a287" containerName="registry-server" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.295619 4872 util.go:30] "No sandbox for pod can be found. 
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.295619 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.300243 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvlsx"]
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.398451 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-catalog-content\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.398586 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f78g\" (UniqueName: \"kubernetes.io/projected/e8ad9fb5-24c7-4a8d-b526-ebd105203105-kube-api-access-6f78g\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.398791 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-utilities\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.505268 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-utilities\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.505698 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-catalog-content\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.505817 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f78g\" (UniqueName: \"kubernetes.io/projected/e8ad9fb5-24c7-4a8d-b526-ebd105203105-kube-api-access-6f78g\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.505938 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-catalog-content\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.506077 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-utilities\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.530795 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f78g\" (UniqueName: \"kubernetes.io/projected/e8ad9fb5-24c7-4a8d-b526-ebd105203105-kube-api-access-6f78g\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx"
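[Note] The reconciler entries above show each of the pod's three volumes moving through the same two phases: operationExecutor.VerifyControllerAttachedVolume, then MountVolume with a SetUp-succeeded confirmation. A schematic sketch of that per-volume ordering (placeholder checks, not the kubelet's operation executor):

    package main

    import "fmt"

    type volume struct{ name string }

    func verifyAttached(v volume) error { return nil } // placeholder check
    func setUp(v volume) error          { return nil } // placeholder mount

    func main() {
        for _, v := range []volume{{"catalog-content"}, {"kube-api-access-6f78g"}, {"utilities"}} {
            if err := verifyAttached(v); err != nil {
                fmt.Printf("verify %s: %v\n", v.name, err)
                continue // a failing volume blocks only itself
            }
            if err := setUp(v); err != nil {
                fmt.Printf("mount %s: %v\n", v.name, err)
                continue
            }
            fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
        }
    }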
"MountVolume.SetUp succeeded for volume \"kube-api-access-6f78g\" (UniqueName: \"kubernetes.io/projected/e8ad9fb5-24c7-4a8d-b526-ebd105203105-kube-api-access-6f78g\") pod \"community-operators-mvlsx\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") " pod="openshift-marketplace/community-operators-mvlsx" Feb 03 07:29:20 crc kubenswrapper[4872]: I0203 07:29:20.611148 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlsx" Feb 03 07:29:21 crc kubenswrapper[4872]: I0203 07:29:21.262665 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvlsx"] Feb 03 07:29:21 crc kubenswrapper[4872]: I0203 07:29:21.879616 4872 generic.go:334] "Generic (PLEG): container finished" podID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerID="157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d" exitCode=0 Feb 03 07:29:21 crc kubenswrapper[4872]: I0203 07:29:21.879659 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlsx" event={"ID":"e8ad9fb5-24c7-4a8d-b526-ebd105203105","Type":"ContainerDied","Data":"157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d"} Feb 03 07:29:21 crc kubenswrapper[4872]: I0203 07:29:21.879700 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlsx" event={"ID":"e8ad9fb5-24c7-4a8d-b526-ebd105203105","Type":"ContainerStarted","Data":"1bdad29443101a38537bf2a345a341b6465aed80cd09fdaee6dad084af6fa6ef"} Feb 03 07:29:23 crc kubenswrapper[4872]: I0203 07:29:23.897996 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlsx" event={"ID":"e8ad9fb5-24c7-4a8d-b526-ebd105203105","Type":"ContainerStarted","Data":"4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d"} Feb 03 07:29:26 crc kubenswrapper[4872]: I0203 07:29:26.925711 4872 generic.go:334] "Generic (PLEG): container finished" podID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerID="4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d" exitCode=0 Feb 03 07:29:26 crc kubenswrapper[4872]: I0203 07:29:26.925770 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlsx" event={"ID":"e8ad9fb5-24c7-4a8d-b526-ebd105203105","Type":"ContainerDied","Data":"4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d"} Feb 03 07:29:29 crc kubenswrapper[4872]: I0203 07:29:29.960381 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlsx" event={"ID":"e8ad9fb5-24c7-4a8d-b526-ebd105203105","Type":"ContainerStarted","Data":"249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6"} Feb 03 07:29:30 crc kubenswrapper[4872]: I0203 07:29:30.612400 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mvlsx" Feb 03 07:29:30 crc kubenswrapper[4872]: I0203 07:29:30.612469 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mvlsx" Feb 03 07:29:31 crc kubenswrapper[4872]: I0203 07:29:31.665829 4872 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mvlsx" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="registry-server" probeResult="failure" output=< Feb 03 07:29:31 crc kubenswrapper[4872]: timeout: failed to connect service ":50051" within 1s 
Feb 03 07:29:40 crc kubenswrapper[4872]: I0203 07:29:40.709872 4872 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:40 crc kubenswrapper[4872]: I0203 07:29:40.737821 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mvlsx" podStartSLOduration=14.200920111 podStartE2EDuration="20.737803611s" podCreationTimestamp="2026-02-03 07:29:20 +0000 UTC" firstStartedPulling="2026-02-03 07:29:21.881256279 +0000 UTC m=+5332.463947693" lastFinishedPulling="2026-02-03 07:29:28.418139769 +0000 UTC m=+5339.000831193" observedRunningTime="2026-02-03 07:29:29.986072837 +0000 UTC m=+5340.568764261" watchObservedRunningTime="2026-02-03 07:29:40.737803611 +0000 UTC m=+5351.320495025"
Feb 03 07:29:40 crc kubenswrapper[4872]: I0203 07:29:40.760023 4872 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:40 crc kubenswrapper[4872]: I0203 07:29:40.949097 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvlsx"]
Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.070383 4872 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mvlsx" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="registry-server" containerID="cri-o://249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6" gracePeriod=2
Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.539777 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.679375 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-utilities\") pod \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") "
Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.679449 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f78g\" (UniqueName: \"kubernetes.io/projected/e8ad9fb5-24c7-4a8d-b526-ebd105203105-kube-api-access-6f78g\") pod \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") "
Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.679541 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-catalog-content\") pod \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\" (UID: \"e8ad9fb5-24c7-4a8d-b526-ebd105203105\") "
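[Note] The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick recomputation from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    // Recompute the startup-latency fields from the log entry above.
    // Matches the logged 20.737803611s / 14.200920111s up to ~10ns of
    // monotonic-vs-wall clock skew in the pull window.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-02-03 07:29:20 +0000 UTC")
        firstPull := parse("2026-02-03 07:29:21.881256279 +0000 UTC")
        lastPull := parse("2026-02-03 07:29:28.418139769 +0000 UTC")
        running := parse("2026-02-03 07:29:40.737803611 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }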
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.686036 4872 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.686432 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ad9fb5-24c7-4a8d-b526-ebd105203105-kube-api-access-6f78g" (OuterVolumeSpecName: "kube-api-access-6f78g") pod "e8ad9fb5-24c7-4a8d-b526-ebd105203105" (UID: "e8ad9fb5-24c7-4a8d-b526-ebd105203105"). InnerVolumeSpecName "kube-api-access-6f78g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.751152 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8ad9fb5-24c7-4a8d-b526-ebd105203105" (UID: "e8ad9fb5-24c7-4a8d-b526-ebd105203105"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.788143 4872 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ad9fb5-24c7-4a8d-b526-ebd105203105-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:42 crc kubenswrapper[4872]: I0203 07:29:42.788352 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f78g\" (UniqueName: \"kubernetes.io/projected/e8ad9fb5-24c7-4a8d-b526-ebd105203105-kube-api-access-6f78g\") on node \"crc\" DevicePath \"\"" Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.093237 4872 generic.go:334] "Generic (PLEG): container finished" podID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerID="249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6" exitCode=0 Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.095449 4872 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.095449 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlsx"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.095532 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlsx" event={"ID":"e8ad9fb5-24c7-4a8d-b526-ebd105203105","Type":"ContainerDied","Data":"249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6"}
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.095602 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlsx" event={"ID":"e8ad9fb5-24c7-4a8d-b526-ebd105203105","Type":"ContainerDied","Data":"1bdad29443101a38537bf2a345a341b6465aed80cd09fdaee6dad084af6fa6ef"}
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.095625 4872 scope.go:117] "RemoveContainer" containerID="249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.119291 4872 scope.go:117] "RemoveContainer" containerID="4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.150939 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvlsx"]
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.151225 4872 scope.go:117] "RemoveContainer" containerID="157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.164735 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mvlsx"]
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.193152 4872 scope.go:117] "RemoveContainer" containerID="249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6"
Feb 03 07:29:43 crc kubenswrapper[4872]: E0203 07:29:43.193500 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6\": container with ID starting with 249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6 not found: ID does not exist" containerID="249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.193528 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6"} err="failed to get container status \"249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6\": rpc error: code = NotFound desc = could not find container \"249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6\": container with ID starting with 249168e307b01dfa6410f5bd2c4a1d805a46ae9ce6fd0725da1ff8497ac8aeb6 not found: ID does not exist"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.193548 4872 scope.go:117] "RemoveContainer" containerID="4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d"
Feb 03 07:29:43 crc kubenswrapper[4872]: E0203 07:29:43.194010 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d\": container with ID starting with 4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d not found: ID does not exist" containerID="4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.194058 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d"} err="failed to get container status \"4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d\": rpc error: code = NotFound desc = could not find container \"4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d\": container with ID starting with 4082c7b4cd4487ffaf3ffaad2daf3d5b3480543b52559faf926c49da21d31f0d not found: ID does not exist"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.194085 4872 scope.go:117] "RemoveContainer" containerID="157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d"
Feb 03 07:29:43 crc kubenswrapper[4872]: E0203 07:29:43.194531 4872 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d\": container with ID starting with 157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d not found: ID does not exist" containerID="157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d"
Feb 03 07:29:43 crc kubenswrapper[4872]: I0203 07:29:43.194611 4872 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d"} err="failed to get container status \"157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d\": rpc error: code = NotFound desc = could not find container \"157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d\": container with ID starting with 157c0f4264e3ecfacea1c070f6a75182a46504a0caebaa35d5be53465ae32e6d not found: ID does not exist"
Feb 03 07:29:44 crc kubenswrapper[4872]: I0203 07:29:44.133042 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" path="/var/lib/kubelet/pods/e8ad9fb5-24c7-4a8d-b526-ebd105203105/volumes"
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.233853 4872 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f"]
Feb 03 07:30:00 crc kubenswrapper[4872]: E0203 07:30:00.236393 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="registry-server"
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.236505 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="registry-server"
Feb 03 07:30:00 crc kubenswrapper[4872]: E0203 07:30:00.236597 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="extract-utilities"
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.236667 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="extract-utilities"
Feb 03 07:30:00 crc kubenswrapper[4872]: E0203 07:30:00.236803 4872 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="extract-content"
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.236879 4872 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="extract-content"
podUID="e8ad9fb5-24c7-4a8d-b526-ebd105203105" containerName="registry-server" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.237993 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.240967 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f"] Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.243058 4872 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.243143 4872 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.329006 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f689b6f8-1870-49d3-8076-8b14abc149f9-config-volume\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.329116 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f689b6f8-1870-49d3-8076-8b14abc149f9-secret-volume\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.329156 4872 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbx86\" (UniqueName: \"kubernetes.io/projected/f689b6f8-1870-49d3-8076-8b14abc149f9-kube-api-access-tbx86\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.431645 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f689b6f8-1870-49d3-8076-8b14abc149f9-secret-volume\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.431903 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbx86\" (UniqueName: \"kubernetes.io/projected/f689b6f8-1870-49d3-8076-8b14abc149f9-kube-api-access-tbx86\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.432162 4872 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f689b6f8-1870-49d3-8076-8b14abc149f9-config-volume\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.433067 
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.433067 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f689b6f8-1870-49d3-8076-8b14abc149f9-config-volume\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f"
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.438539 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f689b6f8-1870-49d3-8076-8b14abc149f9-secret-volume\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f"
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.461830 4872 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbx86\" (UniqueName: \"kubernetes.io/projected/f689b6f8-1870-49d3-8076-8b14abc149f9-kube-api-access-tbx86\") pod \"collect-profiles-29501730-jfl8f\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f"
Feb 03 07:30:00 crc kubenswrapper[4872]: I0203 07:30:00.573286 4872 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f"
Feb 03 07:30:01 crc kubenswrapper[4872]: I0203 07:30:01.058631 4872 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f"]
Feb 03 07:30:01 crc kubenswrapper[4872]: W0203 07:30:01.068918 4872 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf689b6f8_1870_49d3_8076_8b14abc149f9.slice/crio-847acfbefa36dba5e253d655454521dc7e61ab8e23ccaa52a949e0d2d36cc424 WatchSource:0}: Error finding container 847acfbefa36dba5e253d655454521dc7e61ab8e23ccaa52a949e0d2d36cc424: Status 404 returned error can't find the container with id 847acfbefa36dba5e253d655454521dc7e61ab8e23ccaa52a949e0d2d36cc424
Feb 03 07:30:01 crc kubenswrapper[4872]: I0203 07:30:01.271618 4872 patch_prober.go:28] interesting pod/machine-config-daemon-hgxbn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 07:30:01 crc kubenswrapper[4872]: I0203 07:30:01.271880 4872 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hgxbn" podUID="05d05db4-da7f-4f2f-9025-672aefab2d16" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 07:30:01 crc kubenswrapper[4872]: I0203 07:30:01.273954 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" event={"ID":"f689b6f8-1870-49d3-8076-8b14abc149f9","Type":"ContainerStarted","Data":"a943b35523916ac4bb2e6bdec6911bb1bc777c425d75bea8be50b74a13f41113"}
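[Note] The machine-config-daemon liveness failure above is a connection-level error: "connect: connection refused" means nothing was listening on 127.0.0.1:8798 at that moment, not that the /health handler returned a bad status. An equivalent check (the timeout value here is illustrative, not taken from the probe spec):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // GET the daemon's /health endpoint, as the kubelet's HTTP liveness
    // probe does. A dial error and a non-2xx response are different
    // failure modes; the log above shows the former.
    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            fmt.Println("liveness failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("liveness status:", resp.Status)
    }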
event={"ID":"f689b6f8-1870-49d3-8076-8b14abc149f9","Type":"ContainerStarted","Data":"847acfbefa36dba5e253d655454521dc7e61ab8e23ccaa52a949e0d2d36cc424"} Feb 03 07:30:01 crc kubenswrapper[4872]: I0203 07:30:01.297532 4872 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" podStartSLOduration=1.297492582 podStartE2EDuration="1.297492582s" podCreationTimestamp="2026-02-03 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 07:30:01.289449636 +0000 UTC m=+5371.872141050" watchObservedRunningTime="2026-02-03 07:30:01.297492582 +0000 UTC m=+5371.880184016" Feb 03 07:30:02 crc kubenswrapper[4872]: I0203 07:30:02.286641 4872 generic.go:334] "Generic (PLEG): container finished" podID="f689b6f8-1870-49d3-8076-8b14abc149f9" containerID="a943b35523916ac4bb2e6bdec6911bb1bc777c425d75bea8be50b74a13f41113" exitCode=0 Feb 03 07:30:02 crc kubenswrapper[4872]: I0203 07:30:02.286770 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" event={"ID":"f689b6f8-1870-49d3-8076-8b14abc149f9","Type":"ContainerDied","Data":"a943b35523916ac4bb2e6bdec6911bb1bc777c425d75bea8be50b74a13f41113"} Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.600038 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.700110 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f689b6f8-1870-49d3-8076-8b14abc149f9-secret-volume\") pod \"f689b6f8-1870-49d3-8076-8b14abc149f9\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.700354 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbx86\" (UniqueName: \"kubernetes.io/projected/f689b6f8-1870-49d3-8076-8b14abc149f9-kube-api-access-tbx86\") pod \"f689b6f8-1870-49d3-8076-8b14abc149f9\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.700453 4872 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f689b6f8-1870-49d3-8076-8b14abc149f9-config-volume\") pod \"f689b6f8-1870-49d3-8076-8b14abc149f9\" (UID: \"f689b6f8-1870-49d3-8076-8b14abc149f9\") " Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.700938 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f689b6f8-1870-49d3-8076-8b14abc149f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "f689b6f8-1870-49d3-8076-8b14abc149f9" (UID: "f689b6f8-1870-49d3-8076-8b14abc149f9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.705837 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f689b6f8-1870-49d3-8076-8b14abc149f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f689b6f8-1870-49d3-8076-8b14abc149f9" (UID: "f689b6f8-1870-49d3-8076-8b14abc149f9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.706728 4872 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f689b6f8-1870-49d3-8076-8b14abc149f9-kube-api-access-tbx86" (OuterVolumeSpecName: "kube-api-access-tbx86") pod "f689b6f8-1870-49d3-8076-8b14abc149f9" (UID: "f689b6f8-1870-49d3-8076-8b14abc149f9"). InnerVolumeSpecName "kube-api-access-tbx86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.802620 4872 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f689b6f8-1870-49d3-8076-8b14abc149f9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.802658 4872 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f689b6f8-1870-49d3-8076-8b14abc149f9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 07:30:03 crc kubenswrapper[4872]: I0203 07:30:03.802673 4872 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbx86\" (UniqueName: \"kubernetes.io/projected/f689b6f8-1870-49d3-8076-8b14abc149f9-kube-api-access-tbx86\") on node \"crc\" DevicePath \"\"" Feb 03 07:30:04 crc kubenswrapper[4872]: I0203 07:30:04.303475 4872 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" event={"ID":"f689b6f8-1870-49d3-8076-8b14abc149f9","Type":"ContainerDied","Data":"847acfbefa36dba5e253d655454521dc7e61ab8e23ccaa52a949e0d2d36cc424"} Feb 03 07:30:04 crc kubenswrapper[4872]: I0203 07:30:04.303507 4872 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847acfbefa36dba5e253d655454521dc7e61ab8e23ccaa52a949e0d2d36cc424" Feb 03 07:30:04 crc kubenswrapper[4872]: I0203 07:30:04.303523 4872 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29501730-jfl8f" Feb 03 07:30:04 crc kubenswrapper[4872]: I0203 07:30:04.388093 4872 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v"] Feb 03 07:30:04 crc kubenswrapper[4872]: I0203 07:30:04.406256 4872 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29501685-8vq6v"] Feb 03 07:30:06 crc kubenswrapper[4872]: I0203 07:30:06.143957 4872 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9394df01-098e-42e4-8f6e-47e156fc07ad" path="/var/lib/kubelet/pods/9394df01-098e-42e4-8f6e-47e156fc07ad/volumes"